2025-08-14T21:22:52.8483181Z Current runner version: '2.328.0'
2025-08-14T21:22:52.8488387Z Runner name: 'i-06c8ea4ed8741f176'
2025-08-14T21:22:52.8489284Z Runner group name: 'default'
2025-08-14T21:22:52.8490039Z Machine name: 'ip-10-0-19-47'
2025-08-14T21:22:52.8492231Z ##[group]GITHUB_TOKEN Permissions
2025-08-14T21:22:52.8494495Z Contents: read
2025-08-14T21:22:52.8494992Z Metadata: read
2025-08-14T21:22:52.8495430Z ##[endgroup]
2025-08-14T21:22:52.8497349Z Secret source: Actions
2025-08-14T21:22:52.8497993Z Prepare workflow directory
2025-08-14T21:22:52.8900019Z Prepare all required actions
2025-08-14T21:22:52.8933937Z Getting action download info
2025-08-14T21:22:53.1685281Z Download action repository 'pytorch/test-infra@main' (SHA:83f58f391e939c10dcb8cb6d745e4cefa3b98a84)
2025-08-14T21:22:54.6592712Z Download action repository 'pytorch/pytorch@main' (SHA:3be70dc30e893b552fc0f23ca06cd8f7949b6d08)
2025-08-14T21:23:10.4587597Z Download action repository 'actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065' (SHA:a26af69be951a213d495a4c3e4e4022e16d87065)
2025-08-14T21:23:10.8027882Z Download action repository 'aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722' (SHA:ececac1a45f3b08a01d2dd070d28d111c5fe6722)
2025-08-14T21:23:11.0463431Z Download action repository 'aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076' (SHA:062b18b96a7aff071d4dc91bc00c4c1a7945b076)
2025-08-14T21:23:11.6827230Z Download action repository 'seemethere/upload-artifact-s3@baba72d0712b404f646cebe0730933554ebce96a' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2025-08-14T21:23:12.5485866Z Getting action download info
2025-08-14T21:23:12.6418205Z Download action repository 'actions/checkout@v4' (SHA:08eba0b27e820071cde6df949e0beb9ba4906955)
2025-08-14T21:23:13.2607301Z Getting action download info
2025-08-14T21:23:13.4470294Z Download action repository 'nick-fields/retry@v3.0.0' (SHA:7152eba30c6575329ac0576536151aca5a72780e)
2025-08-14T21:23:14.0238591Z Getting action download info
2025-08-14T21:23:14.1364135Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482)
2025-08-14T21:23:14.8226164Z Getting action download info
2025-08-14T21:23:14.9340791Z Uses: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/heads/main (1fc683cf17c8c673044538d10266c00f92987be2)
2025-08-14T21:23:14.9344348Z ##[group] Inputs
2025-08-14T21:23:14.9344642Z   build-environment: linux-jammy-py3.9-gcc11-build
2025-08-14T21:23:14.9349165Z   test-matrix: {"include": [{"config": "cpu_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]}
2025-08-14T21:23:14.9354109Z   docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe
2025-08-14T21:23:14.9354748Z   sync-tag:
2025-08-14T21:23:14.9355496Z   timeout-minutes: 240
2025-08-14T21:23:14.9355711Z   use-gha:
2025-08-14T21:23:14.9355898Z   dashboard-tag:
2025-08-14T21:23:14.9356100Z   s3-bucket: gha-artifacts
2025-08-14T21:23:14.9356310Z   aws-role-to-assume:
2025-08-14T21:23:14.9356708Z   disable-monitor: false
2025-08-14T21:23:14.9356941Z   monitor-log-interval: 5
2025-08-14T21:23:14.9357181Z   monitor-data-collect-interval: 1
2025-08-14T21:23:14.9357422Z ##[endgroup]
2025-08-14T21:23:14.9358024Z Complete job name: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)
2025-08-14T21:23:14.9881537Z A job started hook has been configured by the self-hosted runner administrator
2025-08-14T21:23:14.9965207Z ##[group]Run '/home/ec2-user/runner-scripts/before_job.sh'
2025-08-14T21:23:14.9972493Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-08-14T21:23:14.9972929Z ##[endgroup]
2025-08-14T21:23:16.0959425Z Runner Type: linux.8xlarge.amx
2025-08-14T21:23:16.0960005Z Instance Type: m7i-flex.8xlarge
2025-08-14T21:23:16.0960401Z AMI Name: unknown
2025-08-14T21:23:16.0988159Z AMI ID: ami-05ffe3c48a9991133
2025-08-14T21:23:20.9414829Z ##[group]Run pytorch/test-infra/.github/actions/setup-ssh@main
2025-08-14T21:23:20.9415360Z with:
2025-08-14T21:23:20.9416018Z   github-secret: ***
2025-08-14T21:23:20.9416523Z   instructions: All testing is done inside the container, to start an interactive session run: docker exec -it $(docker container ps --format '{{.ID}}') bash
2025-08-14T21:23:20.9417129Z   activate-with-label: false
2025-08-14T21:23:20.9417413Z   label: with-ssh
2025-08-14T21:23:20.9417688Z   remove-existing-keys: true
2025-08-14T21:23:20.9418016Z   fail-silently: true
2025-08-14T21:23:20.9418276Z env:
2025-08-14T21:23:20.9418521Z   GIT_DEFAULT_BRANCH: main
2025-08-14T21:23:20.9418779Z ##[endgroup]
2025-08-14T21:23:21.0759351Z Please see https://github.com/pytorch/pytorch/wiki/Debugging-using-with-ssh-for-Github-Actions for more info.
2025-08-14T21:23:21.0760639Z Not on pull request and ciflow reference could not be extracted, skipping adding ssh keys
2025-08-14T21:23:21.1138784Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@main
2025-08-14T21:23:21.1232049Z with:
2025-08-14T21:23:21.1232290Z   no-sudo: true
2025-08-14T21:23:21.1232468Z   submodules: recursive
2025-08-14T21:23:21.1232688Z   fetch-depth: 0
2025-08-14T21:23:21.1232868Z env:
2025-08-14T21:23:21.1233027Z   GIT_DEFAULT_BRANCH: main
2025-08-14T21:23:21.1233222Z ##[endgroup]
2025-08-14T21:23:21.1300158Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-08-14T21:23:21.1300841Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-08-14T21:23:21.1309085Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-08-14T21:23:21.1309356Z env:
2025-08-14T21:23:21.1309730Z   GIT_DEFAULT_BRANCH: main
2025-08-14T21:23:21.1309974Z ##[endgroup]
2025-08-14T21:23:21.1397513Z ##[group]Run # Use all available CPUs for fetching
2025-08-14T21:23:21.1397853Z # Use all available CPUs for fetching
2025-08-14T21:23:21.1398077Z cd "${GITHUB_WORKSPACE}"
2025-08-14T21:23:21.1398315Z git config --global fetch.parallel 0
2025-08-14T21:23:21.1398572Z git config --global submodule.fetchJobs 0
2025-08-14T21:23:21.1398796Z 
2025-08-14T21:23:21.1399033Z # Clean workspace. The default checkout action should also do this, but
2025-08-14T21:23:21.1399334Z # do it here as well just in case
2025-08-14T21:23:21.1399548Z if [[ -d .git ]]; then
2025-08-14T21:23:21.1399744Z   if [ -z "${NO_SUDO}" ]; then
2025-08-14T21:23:21.1399955Z     sudo git clean -ffdx
2025-08-14T21:23:21.1400144Z   else
2025-08-14T21:23:21.1400302Z     git clean -ffdx
2025-08-14T21:23:21.1400492Z   fi
2025-08-14T21:23:21.1400653Z fi
2025-08-14T21:23:21.1405168Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-08-14T21:23:21.1405434Z env:
2025-08-14T21:23:21.1405615Z   GIT_DEFAULT_BRANCH: main
2025-08-14T21:23:21.1405797Z   NO_SUDO: true
2025-08-14T21:23:21.1406051Z ##[endgroup]
2025-08-14T21:23:21.1505833Z ##[group]Run actions/checkout@v4
2025-08-14T21:23:21.1506081Z with:
2025-08-14T21:23:21.1506283Z   ref: 1fc683cf17c8c673044538d10266c00f92987be2
2025-08-14T21:23:21.1506518Z   fetch-depth: 0
2025-08-14T21:23:21.1506702Z   submodules: recursive
2025-08-14T21:23:21.1506900Z   show-progress: false
2025-08-14T21:23:21.1507094Z   repository: pytorch/pytorch
2025-08-14T21:23:21.1507415Z   token: ***
2025-08-14T21:23:21.1507585Z   ssh-strict: true
2025-08-14T21:23:21.1507757Z   ssh-user: git
2025-08-14T21:23:21.1507931Z   persist-credentials: true
2025-08-14T21:23:21.1508130Z   clean: true
2025-08-14T21:23:21.1508319Z   sparse-checkout-cone-mode: true
2025-08-14T21:23:21.1508542Z   fetch-tags: false
2025-08-14T21:23:21.1508728Z   lfs: false
2025-08-14T21:23:21.1508905Z   set-safe-directory: true
2025-08-14T21:23:21.1509103Z env:
2025-08-14T21:23:21.1509273Z   GIT_DEFAULT_BRANCH: main
2025-08-14T21:23:21.1509465Z ##[endgroup]
2025-08-14T21:23:21.2535209Z Syncing repository: pytorch/pytorch
2025-08-14T21:23:21.2536409Z ##[group]Getting Git version info
2025-08-14T21:23:21.2536769Z Working directory is '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-08-14T21:23:21.2537250Z [command]/usr/bin/git version
2025-08-14T21:23:21.2756646Z git version 2.47.1
2025-08-14T21:23:21.2774696Z ##[endgroup]
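The checkout steps above (checkout-pytorch wrapping actions/checkout@v4 with ref 1fc683cf17c8c673044538d10266c00f92987be2, fetch-depth 0 and submodules: recursive) boil down to a short git sequence. The sketch below is only an approximation for reproducing the same checkout outside the runner, not the action's actual implementation; it assumes network access to github.com, and the final checkout and submodule commands are implied by the action inputs rather than echoed verbatim in this excerpt.

  # Approximate local equivalent of this job's checkout (sketch, not the action's code)
  git init pytorch && cd pytorch
  git remote add origin https://github.com/pytorch/pytorch
  git config fetch.parallel 0        # 0 lets git pick the parallelism ("use all available CPUs" above)
  git config submodule.fetchJobs 0
  git fetch --prune origin '+refs/heads/*:refs/remotes/origin/*' '+refs/tags/*:refs/tags/*'
  git checkout 1fc683cf17c8c673044538d10266c00f92987be2   # the pinned ref from the Inputs
  git submodule update --init --recursive                 # submodules: recursive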
2025-08-14T21:23:21.2788740Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/c0d1d8d4-6f7e-4113-882e-4cfd1c11e71b/.gitconfig'
2025-08-14T21:23:21.2811391Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/c0d1d8d4-6f7e-4113-882e-4cfd1c11e71b' before making global git config changes
2025-08-14T21:23:21.2812318Z Adding repository directory to the temporary git global config as a safe directory
2025-08-14T21:23:21.2817320Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-08-14T21:23:21.2869563Z Deleting the contents of '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-08-14T21:23:21.2873910Z ##[group]Initializing the repository
2025-08-14T21:23:21.2879461Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-08-14T21:23:21.2940338Z hint: Using 'master' as the name for the initial branch. This default branch name
2025-08-14T21:23:21.2940886Z hint: is subject to change. To configure the initial branch name to use in all
2025-08-14T21:23:21.2941277Z hint: of your new repositories, which will suppress this warning, call:
2025-08-14T21:23:21.2941573Z hint:
2025-08-14T21:23:21.2942044Z hint:   git config --global init.defaultBranch <name>
2025-08-14T21:23:21.2942313Z hint:
2025-08-14T21:23:21.2942565Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
2025-08-14T21:23:21.2943198Z hint: 'development'. The just-created branch can be renamed via this command:
2025-08-14T21:23:21.2943493Z hint:
2025-08-14T21:23:21.2943675Z hint:   git branch -m <name>
2025-08-14T21:23:21.2963251Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/
2025-08-14T21:23:21.2972599Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch
2025-08-14T21:23:21.3014682Z ##[endgroup]
2025-08-14T21:23:21.3015285Z ##[group]Disabling automatic garbage collection
2025-08-14T21:23:21.3017957Z [command]/usr/bin/git config --local gc.auto 0
2025-08-14T21:23:21.3045836Z ##[endgroup]
2025-08-14T21:23:21.3046189Z ##[group]Setting up auth
2025-08-14T21:23:21.3051197Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2025-08-14T21:23:21.3084707Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2025-08-14T21:23:21.3470657Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2025-08-14T21:23:21.3497355Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2025-08-14T21:23:21.3834716Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2025-08-14T21:23:21.3902455Z ##[endgroup]
2025-08-14T21:23:21.3902924Z ##[group]Fetching the repository
2025-08-14T21:23:21.3909330Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
2025-08-14T21:24:05.8237609Z From https://github.com/pytorch/pytorch
2025-08-14T21:24:05.8238041Z  * [new branch]  2.6.0.dev20241004+ -> origin/2.6.0.dev20241004+
2025-08-14T21:24:05.8238569Z  * [new branch]  5addvllmbuild -> origin/5addvllmbuild
[... several hundred further "* [new branch] <branch> -> origin/<branch>" lines from the same fetch omitted; this excerpt ends partway through the branch listing ...]
origin/gh/NikhilAPatel/18/orig 2025-08-14T21:24:05.8656055Z * [new branch] gh/NikhilAPatel/19/base -> origin/gh/NikhilAPatel/19/base 2025-08-14T21:24:05.8656436Z * [new branch] gh/NikhilAPatel/19/head -> origin/gh/NikhilAPatel/19/head 2025-08-14T21:24:05.8656809Z * [new branch] gh/NikhilAPatel/19/orig -> origin/gh/NikhilAPatel/19/orig 2025-08-14T21:24:05.8657198Z * [new branch] gh/NikhilAPatel/2/base -> origin/gh/NikhilAPatel/2/base 2025-08-14T21:24:05.8657628Z * [new branch] gh/NikhilAPatel/2/head -> origin/gh/NikhilAPatel/2/head 2025-08-14T21:24:05.8658019Z * [new branch] gh/NikhilAPatel/4/base -> origin/gh/NikhilAPatel/4/base 2025-08-14T21:24:05.8658387Z * [new branch] gh/NikhilAPatel/4/head -> origin/gh/NikhilAPatel/4/head 2025-08-14T21:24:05.8658765Z * [new branch] gh/NikhilAPatel/8/base -> origin/gh/NikhilAPatel/8/base 2025-08-14T21:24:05.8659135Z * [new branch] gh/NikhilAPatel/8/head -> origin/gh/NikhilAPatel/8/head 2025-08-14T21:24:05.8659509Z * [new branch] gh/NikhilAPatel/8/orig -> origin/gh/NikhilAPatel/8/orig 2025-08-14T21:24:05.8660188Z * [new branch] gh/NikhilAPatel/9/base -> origin/gh/NikhilAPatel/9/base 2025-08-14T21:24:05.8660579Z * [new branch] gh/NikhilAPatel/9/head -> origin/gh/NikhilAPatel/9/head 2025-08-14T21:24:05.8660977Z * [new branch] gh/NikhilAPatel/9/orig -> origin/gh/NikhilAPatel/9/orig 2025-08-14T21:24:05.8661370Z * [new branch] gh/PaliC/1/base -> origin/gh/PaliC/1/base 2025-08-14T21:24:05.8661723Z * [new branch] gh/PaliC/1/head -> origin/gh/PaliC/1/head 2025-08-14T21:24:05.8662055Z * [new branch] gh/PaliC/1/orig -> origin/gh/PaliC/1/orig 2025-08-14T21:24:05.8662408Z * [new branch] gh/PaliC/12/base -> origin/gh/PaliC/12/base 2025-08-14T21:24:05.8662744Z * [new branch] gh/PaliC/12/head -> origin/gh/PaliC/12/head 2025-08-14T21:24:05.8663079Z * [new branch] gh/PaliC/12/orig -> origin/gh/PaliC/12/orig 2025-08-14T21:24:05.8663409Z * [new branch] gh/PaliC/13/base -> origin/gh/PaliC/13/base 2025-08-14T21:24:05.8663752Z * [new branch] gh/PaliC/13/head -> origin/gh/PaliC/13/head 2025-08-14T21:24:05.8664083Z * [new branch] gh/PaliC/13/orig -> origin/gh/PaliC/13/orig 2025-08-14T21:24:05.8664550Z * [new branch] gh/PaliC/14/base -> origin/gh/PaliC/14/base 2025-08-14T21:24:05.8664865Z * [new branch] gh/PaliC/14/head -> origin/gh/PaliC/14/head 2025-08-14T21:24:05.8665169Z * [new branch] gh/PaliC/14/orig -> origin/gh/PaliC/14/orig 2025-08-14T21:24:05.8665467Z * [new branch] gh/PaliC/15/base -> origin/gh/PaliC/15/base 2025-08-14T21:24:05.8665775Z * [new branch] gh/PaliC/15/head -> origin/gh/PaliC/15/head 2025-08-14T21:24:05.8666081Z * [new branch] gh/PaliC/15/orig -> origin/gh/PaliC/15/orig 2025-08-14T21:24:05.8666388Z * [new branch] gh/PaliC/16/base -> origin/gh/PaliC/16/base 2025-08-14T21:24:05.8666688Z * [new branch] gh/PaliC/16/head -> origin/gh/PaliC/16/head 2025-08-14T21:24:05.8666996Z * [new branch] gh/PaliC/16/orig -> origin/gh/PaliC/16/orig 2025-08-14T21:24:05.8667336Z * [new branch] gh/PaliC/17/base -> origin/gh/PaliC/17/base 2025-08-14T21:24:05.8667629Z * [new branch] gh/PaliC/17/head -> origin/gh/PaliC/17/head 2025-08-14T21:24:05.8667913Z * [new branch] gh/PaliC/17/orig -> origin/gh/PaliC/17/orig 2025-08-14T21:24:05.8668207Z * [new branch] gh/PaliC/18/base -> origin/gh/PaliC/18/base 2025-08-14T21:24:05.8668505Z * [new branch] gh/PaliC/18/head -> origin/gh/PaliC/18/head 2025-08-14T21:24:05.8668802Z * [new branch] gh/PaliC/18/orig -> origin/gh/PaliC/18/orig 2025-08-14T21:24:05.8669109Z * [new branch] gh/PaliC/19/base -> origin/gh/PaliC/19/base 
2025-08-14T21:24:05.8669411Z * [new branch] gh/PaliC/19/head -> origin/gh/PaliC/19/head 2025-08-14T21:24:05.8669809Z * [new branch] gh/PaliC/19/orig -> origin/gh/PaliC/19/orig 2025-08-14T21:24:05.8670199Z * [new branch] gh/PaliC/2/base -> origin/gh/PaliC/2/base 2025-08-14T21:24:05.8674711Z * [new branch] gh/PaliC/2/head -> origin/gh/PaliC/2/head 2025-08-14T21:24:05.8677474Z * [new branch] gh/PaliC/2/orig -> origin/gh/PaliC/2/orig 2025-08-14T21:24:05.8677872Z * [new branch] gh/PaliC/20/base -> origin/gh/PaliC/20/base 2025-08-14T21:24:05.8678206Z * [new branch] gh/PaliC/20/head -> origin/gh/PaliC/20/head 2025-08-14T21:24:05.8678515Z * [new branch] gh/PaliC/20/orig -> origin/gh/PaliC/20/orig 2025-08-14T21:24:05.8678832Z * [new branch] gh/PaliC/21/base -> origin/gh/PaliC/21/base 2025-08-14T21:24:05.8679149Z * [new branch] gh/PaliC/21/head -> origin/gh/PaliC/21/head 2025-08-14T21:24:05.8679459Z * [new branch] gh/PaliC/21/orig -> origin/gh/PaliC/21/orig 2025-08-14T21:24:05.8679782Z * [new branch] gh/PaliC/22/base -> origin/gh/PaliC/22/base 2025-08-14T21:24:05.8680090Z * [new branch] gh/PaliC/22/head -> origin/gh/PaliC/22/head 2025-08-14T21:24:05.8680424Z * [new branch] gh/PaliC/22/orig -> origin/gh/PaliC/22/orig 2025-08-14T21:24:05.8680762Z * [new branch] gh/PaliC/23/base -> origin/gh/PaliC/23/base 2025-08-14T21:24:05.8681088Z * [new branch] gh/PaliC/23/head -> origin/gh/PaliC/23/head 2025-08-14T21:24:05.8681638Z * [new branch] gh/PaliC/23/orig -> origin/gh/PaliC/23/orig 2025-08-14T21:24:05.8686881Z * [new branch] gh/PaliC/24/base -> origin/gh/PaliC/24/base 2025-08-14T21:24:05.8691137Z * [new branch] gh/PaliC/24/head -> origin/gh/PaliC/24/head 2025-08-14T21:24:05.8693272Z * [new branch] gh/PaliC/24/orig -> origin/gh/PaliC/24/orig 2025-08-14T21:24:05.8694073Z * [new branch] gh/PaulZhang12/17/base -> origin/gh/PaulZhang12/17/base 2025-08-14T21:24:05.8694474Z * [new branch] gh/PaulZhang12/17/head -> origin/gh/PaulZhang12/17/head 2025-08-14T21:24:05.8694846Z * [new branch] gh/PaulZhang12/18/base -> origin/gh/PaulZhang12/18/base 2025-08-14T21:24:05.8695213Z * [new branch] gh/PaulZhang12/18/head -> origin/gh/PaulZhang12/18/head 2025-08-14T21:24:05.8695575Z * [new branch] gh/PaulZhang12/18/orig -> origin/gh/PaulZhang12/18/orig 2025-08-14T21:24:05.8695944Z * [new branch] gh/PaulZhang12/19/base -> origin/gh/PaulZhang12/19/base 2025-08-14T21:24:05.8696311Z * [new branch] gh/PaulZhang12/19/head -> origin/gh/PaulZhang12/19/head 2025-08-14T21:24:05.8696676Z * [new branch] gh/PaulZhang12/19/orig -> origin/gh/PaulZhang12/19/orig 2025-08-14T21:24:05.8697033Z * [new branch] gh/PaulZhang12/20/base -> origin/gh/PaulZhang12/20/base 2025-08-14T21:24:05.8697447Z * [new branch] gh/PaulZhang12/20/head -> origin/gh/PaulZhang12/20/head 2025-08-14T21:24:05.8697808Z * [new branch] gh/PaulZhang12/20/orig -> origin/gh/PaulZhang12/20/orig 2025-08-14T21:24:05.8698174Z * [new branch] gh/PaulZhang12/21/base -> origin/gh/PaulZhang12/21/base 2025-08-14T21:24:05.8698535Z * [new branch] gh/PaulZhang12/21/head -> origin/gh/PaulZhang12/21/head 2025-08-14T21:24:05.8698896Z * [new branch] gh/PaulZhang12/21/orig -> origin/gh/PaulZhang12/21/orig 2025-08-14T21:24:05.8699304Z * [new branch] gh/PaulZhang12/22/base -> origin/gh/PaulZhang12/22/base 2025-08-14T21:24:05.8699672Z * [new branch] gh/PaulZhang12/22/head -> origin/gh/PaulZhang12/22/head 2025-08-14T21:24:05.8700136Z * [new branch] gh/PaulZhang12/22/orig -> origin/gh/PaulZhang12/22/orig 2025-08-14T21:24:05.8700518Z * [new branch] gh/SamGinzburg/11/base -> origin/gh/SamGinzburg/11/base 
2025-08-14T21:24:05.8700899Z * [new branch] gh/SamGinzburg/11/head -> origin/gh/SamGinzburg/11/head 2025-08-14T21:24:05.8701301Z * [new branch] gh/Sidharth123-cpu/24/base -> origin/gh/Sidharth123-cpu/24/base 2025-08-14T21:24:05.8701691Z * [new branch] gh/Sidharth123-cpu/25/base -> origin/gh/Sidharth123-cpu/25/base 2025-08-14T21:24:05.8702078Z * [new branch] gh/Sidharth123-cpu/26/base -> origin/gh/Sidharth123-cpu/26/base 2025-08-14T21:24:05.8705204Z * [new branch] gh/Sidharth123-cpu/27/base -> origin/gh/Sidharth123-cpu/27/base 2025-08-14T21:24:05.8705586Z * [new branch] gh/Sidharth123-cpu/42/base -> origin/gh/Sidharth123-cpu/42/base 2025-08-14T21:24:05.8705966Z * [new branch] gh/Sidharth123-cpu/42/head -> origin/gh/Sidharth123-cpu/42/head 2025-08-14T21:24:05.8706345Z * [new branch] gh/Sidharth123-cpu/42/orig -> origin/gh/Sidharth123-cpu/42/orig 2025-08-14T21:24:05.8706729Z * [new branch] gh/Sidharth123-cpu/43/base -> origin/gh/Sidharth123-cpu/43/base 2025-08-14T21:24:05.8707101Z * [new branch] gh/Sidharth123-cpu/43/head -> origin/gh/Sidharth123-cpu/43/head 2025-08-14T21:24:05.8709941Z * [new branch] gh/Sidharth123-cpu/43/orig -> origin/gh/Sidharth123-cpu/43/orig 2025-08-14T21:24:05.8710328Z * [new branch] gh/Sidharth123-cpu/44/base -> origin/gh/Sidharth123-cpu/44/base 2025-08-14T21:24:05.8710707Z * [new branch] gh/Sidharth123-cpu/44/head -> origin/gh/Sidharth123-cpu/44/head 2025-08-14T21:24:05.8711077Z * [new branch] gh/Sidharth123-cpu/44/orig -> origin/gh/Sidharth123-cpu/44/orig 2025-08-14T21:24:05.8711457Z * [new branch] gh/Sidharth123-cpu/45/base -> origin/gh/Sidharth123-cpu/45/base 2025-08-14T21:24:05.8716169Z * [new branch] gh/Sidharth123-cpu/45/head -> origin/gh/Sidharth123-cpu/45/head 2025-08-14T21:24:05.8721254Z * [new branch] gh/Sidharth123-cpu/45/orig -> origin/gh/Sidharth123-cpu/45/orig 2025-08-14T21:24:05.8721722Z * [new branch] gh/StrongerXi/1/base -> origin/gh/StrongerXi/1/base 2025-08-14T21:24:05.8722107Z * [new branch] gh/StrongerXi/1/head -> origin/gh/StrongerXi/1/head 2025-08-14T21:24:05.8722476Z * [new branch] gh/StrongerXi/103/base -> origin/gh/StrongerXi/103/base 2025-08-14T21:24:05.8722845Z * [new branch] gh/StrongerXi/103/head -> origin/gh/StrongerXi/103/head 2025-08-14T21:24:05.8723205Z * [new branch] gh/StrongerXi/103/orig -> origin/gh/StrongerXi/103/orig 2025-08-14T21:24:05.8723554Z * [new branch] gh/StrongerXi/133/base -> origin/gh/StrongerXi/133/base 2025-08-14T21:24:05.8723913Z * [new branch] gh/StrongerXi/133/head -> origin/gh/StrongerXi/133/head 2025-08-14T21:24:05.8724266Z * [new branch] gh/StrongerXi/133/orig -> origin/gh/StrongerXi/133/orig 2025-08-14T21:24:05.8724677Z * [new branch] gh/StrongerXi/134/base -> origin/gh/StrongerXi/134/base 2025-08-14T21:24:05.8725032Z * [new branch] gh/StrongerXi/134/head -> origin/gh/StrongerXi/134/head 2025-08-14T21:24:05.8725385Z * [new branch] gh/StrongerXi/134/orig -> origin/gh/StrongerXi/134/orig 2025-08-14T21:24:05.8725743Z * [new branch] gh/StrongerXi/135/base -> origin/gh/StrongerXi/135/base 2025-08-14T21:24:05.8726099Z * [new branch] gh/StrongerXi/135/head -> origin/gh/StrongerXi/135/head 2025-08-14T21:24:05.8726459Z * [new branch] gh/StrongerXi/135/orig -> origin/gh/StrongerXi/135/orig 2025-08-14T21:24:05.8726854Z * [new branch] gh/StrongerXi/136/base -> origin/gh/StrongerXi/136/base 2025-08-14T21:24:05.8727213Z * [new branch] gh/StrongerXi/136/head -> origin/gh/StrongerXi/136/head 2025-08-14T21:24:05.8727568Z * [new branch] gh/StrongerXi/136/orig -> origin/gh/StrongerXi/136/orig 2025-08-14T21:24:05.8727924Z * 
[new branch] gh/StrongerXi/137/base -> origin/gh/StrongerXi/137/base 2025-08-14T21:24:05.8728280Z * [new branch] gh/StrongerXi/137/head -> origin/gh/StrongerXi/137/head 2025-08-14T21:24:05.8731435Z * [new branch] gh/StrongerXi/137/orig -> origin/gh/StrongerXi/137/orig 2025-08-14T21:24:05.8735180Z * [new branch] gh/StrongerXi/138/base -> origin/gh/StrongerXi/138/base 2025-08-14T21:24:05.8735573Z * [new branch] gh/StrongerXi/138/head -> origin/gh/StrongerXi/138/head 2025-08-14T21:24:05.8735964Z * [new branch] gh/StrongerXi/138/orig -> origin/gh/StrongerXi/138/orig 2025-08-14T21:24:05.8736413Z * [new branch] gh/StrongerXi/71/base -> origin/gh/StrongerXi/71/base 2025-08-14T21:24:05.8736813Z * [new branch] gh/StrongerXi/71/head -> origin/gh/StrongerXi/71/head 2025-08-14T21:24:05.8737192Z * [new branch] gh/StrongerXi/72/base -> origin/gh/StrongerXi/72/base 2025-08-14T21:24:05.8737548Z * [new branch] gh/StrongerXi/72/head -> origin/gh/StrongerXi/72/head 2025-08-14T21:24:05.8737910Z * [new branch] gh/XilunWu/131/base -> origin/gh/XilunWu/131/base 2025-08-14T21:24:05.8738259Z * [new branch] gh/XilunWu/131/head -> origin/gh/XilunWu/131/head 2025-08-14T21:24:05.8738607Z * [new branch] gh/XilunWu/131/orig -> origin/gh/XilunWu/131/orig 2025-08-14T21:24:05.8738977Z * [new branch] gh/XilunWu/133/base -> origin/gh/XilunWu/133/base 2025-08-14T21:24:05.8739339Z * [new branch] gh/XilunWu/133/head -> origin/gh/XilunWu/133/head 2025-08-14T21:24:05.8739828Z * [new branch] gh/XilunWu/133/orig -> origin/gh/XilunWu/133/orig 2025-08-14T21:24:05.8740208Z * [new branch] gh/XilunWu/136/base -> origin/gh/XilunWu/136/base 2025-08-14T21:24:05.8740729Z * [new branch] gh/XilunWu/136/head -> origin/gh/XilunWu/136/head 2025-08-14T21:24:05.8743471Z * [new branch] gh/XilunWu/136/orig -> origin/gh/XilunWu/136/orig 2025-08-14T21:24:05.8743816Z * [new branch] gh/XilunWu/139/base -> origin/gh/XilunWu/139/base 2025-08-14T21:24:05.8744308Z * [new branch] gh/XilunWu/139/head -> origin/gh/XilunWu/139/head 2025-08-14T21:24:05.8744668Z * [new branch] gh/XilunWu/139/orig -> origin/gh/XilunWu/139/orig 2025-08-14T21:24:05.8745142Z * [new branch] gh/XilunWu/143/base -> origin/gh/XilunWu/143/base 2025-08-14T21:24:05.8745489Z * [new branch] gh/XilunWu/143/head -> origin/gh/XilunWu/143/head 2025-08-14T21:24:05.8746057Z * [new branch] gh/XilunWu/143/orig -> origin/gh/XilunWu/143/orig 2025-08-14T21:24:05.8748223Z * [new branch] gh/XilunWu/144/base -> origin/gh/XilunWu/144/base 2025-08-14T21:24:05.8748691Z * [new branch] gh/XilunWu/144/head -> origin/gh/XilunWu/144/head 2025-08-14T21:24:05.8749282Z * [new branch] gh/XilunWu/144/orig -> origin/gh/XilunWu/144/orig 2025-08-14T21:24:05.8750083Z * [new branch] gh/XilunWu/145/base -> origin/gh/XilunWu/145/base 2025-08-14T21:24:05.8750794Z * [new branch] gh/XilunWu/145/head -> origin/gh/XilunWu/145/head 2025-08-14T21:24:05.8751367Z * [new branch] gh/XilunWu/145/orig -> origin/gh/XilunWu/145/orig 2025-08-14T21:24:05.8751797Z * [new branch] gh/XilunWu/146/base -> origin/gh/XilunWu/146/base 2025-08-14T21:24:05.8752434Z * [new branch] gh/XilunWu/146/head -> origin/gh/XilunWu/146/head 2025-08-14T21:24:05.8753084Z * [new branch] gh/XilunWu/146/orig -> origin/gh/XilunWu/146/orig 2025-08-14T21:24:05.8757358Z * [new branch] gh/XilunWu/147/base -> origin/gh/XilunWu/147/base 2025-08-14T21:24:05.8757929Z * [new branch] gh/XilunWu/147/head -> origin/gh/XilunWu/147/head 2025-08-14T21:24:05.8758431Z * [new branch] gh/XilunWu/147/orig -> origin/gh/XilunWu/147/orig 2025-08-14T21:24:05.8758760Z * [new branch] 
gh/XilunWu/148/base -> origin/gh/XilunWu/148/base 2025-08-14T21:24:05.8759086Z * [new branch] gh/XilunWu/148/head -> origin/gh/XilunWu/148/head 2025-08-14T21:24:05.8759411Z * [new branch] gh/XilunWu/148/orig -> origin/gh/XilunWu/148/orig 2025-08-14T21:24:05.8759733Z * [new branch] gh/XilunWu/149/base -> origin/gh/XilunWu/149/base 2025-08-14T21:24:05.8760049Z * [new branch] gh/XilunWu/149/head -> origin/gh/XilunWu/149/head 2025-08-14T21:24:05.8760372Z * [new branch] gh/XilunWu/149/orig -> origin/gh/XilunWu/149/orig 2025-08-14T21:24:05.8760700Z * [new branch] gh/XilunWu/150/base -> origin/gh/XilunWu/150/base 2025-08-14T21:24:05.8761221Z * [new branch] gh/XilunWu/150/head -> origin/gh/XilunWu/150/head 2025-08-14T21:24:05.8767488Z * [new branch] gh/XilunWu/150/orig -> origin/gh/XilunWu/150/orig 2025-08-14T21:24:05.8767918Z * [new branch] gh/XilunWu/151/base -> origin/gh/XilunWu/151/base 2025-08-14T21:24:05.8768337Z * [new branch] gh/XilunWu/151/head -> origin/gh/XilunWu/151/head 2025-08-14T21:24:05.8773868Z * [new branch] gh/XilunWu/151/orig -> origin/gh/XilunWu/151/orig 2025-08-14T21:24:05.8775746Z * [new branch] gh/XilunWu/152/base -> origin/gh/XilunWu/152/base 2025-08-14T21:24:05.8776112Z * [new branch] gh/XilunWu/152/head -> origin/gh/XilunWu/152/head 2025-08-14T21:24:05.8776472Z * [new branch] gh/XilunWu/152/orig -> origin/gh/XilunWu/152/orig 2025-08-14T21:24:05.8777024Z * [new branch] gh/XilunWu/153/base -> origin/gh/XilunWu/153/base 2025-08-14T21:24:05.8777378Z * [new branch] gh/XilunWu/153/head -> origin/gh/XilunWu/153/head 2025-08-14T21:24:05.8777717Z * [new branch] gh/XilunWu/153/orig -> origin/gh/XilunWu/153/orig 2025-08-14T21:24:05.8778057Z * [new branch] gh/XilunWu/154/base -> origin/gh/XilunWu/154/base 2025-08-14T21:24:05.8778392Z * [new branch] gh/XilunWu/154/head -> origin/gh/XilunWu/154/head 2025-08-14T21:24:05.8778732Z * [new branch] gh/XilunWu/154/orig -> origin/gh/XilunWu/154/orig 2025-08-14T21:24:05.8779075Z * [new branch] gh/XilunWu/156/base -> origin/gh/XilunWu/156/base 2025-08-14T21:24:05.8779415Z * [new branch] gh/XilunWu/156/head -> origin/gh/XilunWu/156/head 2025-08-14T21:24:05.8779904Z * [new branch] gh/XilunWu/156/orig -> origin/gh/XilunWu/156/orig 2025-08-14T21:24:05.8780331Z * [new branch] gh/XilunWu/157/base -> origin/gh/XilunWu/157/base 2025-08-14T21:24:05.8780691Z * [new branch] gh/XilunWu/157/head -> origin/gh/XilunWu/157/head 2025-08-14T21:24:05.8781018Z * [new branch] gh/XilunWu/157/orig -> origin/gh/XilunWu/157/orig 2025-08-14T21:24:05.8781465Z * [new branch] gh/XilunWu/158/base -> origin/gh/XilunWu/158/base 2025-08-14T21:24:05.8785406Z * [new branch] gh/XilunWu/158/head -> origin/gh/XilunWu/158/head 2025-08-14T21:24:05.8791136Z * [new branch] gh/XilunWu/158/orig -> origin/gh/XilunWu/158/orig 2025-08-14T21:24:05.8796072Z * [new branch] gh/XilunWu/159/base -> origin/gh/XilunWu/159/base 2025-08-14T21:24:05.8796500Z * [new branch] gh/XilunWu/159/head -> origin/gh/XilunWu/159/head 2025-08-14T21:24:05.8796848Z * [new branch] gh/XilunWu/159/orig -> origin/gh/XilunWu/159/orig 2025-08-14T21:24:05.8797240Z * [new branch] gh/XilunWu/160/base -> origin/gh/XilunWu/160/base 2025-08-14T21:24:05.8797591Z * [new branch] gh/XilunWu/160/head -> origin/gh/XilunWu/160/head 2025-08-14T21:24:05.8797934Z * [new branch] gh/XilunWu/160/orig -> origin/gh/XilunWu/160/orig 2025-08-14T21:24:05.8798269Z * [new branch] gh/XilunWu/161/base -> origin/gh/XilunWu/161/base 2025-08-14T21:24:05.8798611Z * [new branch] gh/XilunWu/161/head -> origin/gh/XilunWu/161/head 
2025-08-14T21:24:05.8798950Z * [new branch] gh/XilunWu/161/orig -> origin/gh/XilunWu/161/orig 2025-08-14T21:24:05.8799281Z * [new branch] gh/XilunWu/162/base -> origin/gh/XilunWu/162/base 2025-08-14T21:24:05.8799620Z * [new branch] gh/XilunWu/162/head -> origin/gh/XilunWu/162/head 2025-08-14T21:24:05.8799965Z * [new branch] gh/XilunWu/162/orig -> origin/gh/XilunWu/162/orig 2025-08-14T21:24:05.8800308Z * [new branch] gh/XilunWu/163/base -> origin/gh/XilunWu/163/base 2025-08-14T21:24:05.8800642Z * [new branch] gh/XilunWu/163/head -> origin/gh/XilunWu/163/head 2025-08-14T21:24:05.8800980Z * [new branch] gh/XilunWu/163/orig -> origin/gh/XilunWu/163/orig 2025-08-14T21:24:05.8801338Z * [new branch] gh/XuehaiPan/14/base -> origin/gh/XuehaiPan/14/base 2025-08-14T21:24:05.8801702Z * [new branch] gh/XuehaiPan/14/head -> origin/gh/XuehaiPan/14/head 2025-08-14T21:24:05.8802050Z * [new branch] gh/XuehaiPan/14/orig -> origin/gh/XuehaiPan/14/orig 2025-08-14T21:24:05.8802411Z * [new branch] gh/XuehaiPan/179/base -> origin/gh/XuehaiPan/179/base 2025-08-14T21:24:05.8802771Z * [new branch] gh/XuehaiPan/179/head -> origin/gh/XuehaiPan/179/head 2025-08-14T21:24:05.8803138Z * [new branch] gh/XuehaiPan/179/orig -> origin/gh/XuehaiPan/179/orig 2025-08-14T21:24:05.8803628Z * [new branch] gh/XuehaiPan/189/base -> origin/gh/XuehaiPan/189/base 2025-08-14T21:24:05.8803992Z * [new branch] gh/XuehaiPan/189/head -> origin/gh/XuehaiPan/189/head 2025-08-14T21:24:05.8804375Z * [new branch] gh/XuehaiPan/189/orig -> origin/gh/XuehaiPan/189/orig 2025-08-14T21:24:05.8804731Z * [new branch] gh/XuehaiPan/227/base -> origin/gh/XuehaiPan/227/base 2025-08-14T21:24:05.8805087Z * [new branch] gh/XuehaiPan/227/head -> origin/gh/XuehaiPan/227/head 2025-08-14T21:24:05.8805442Z * [new branch] gh/XuehaiPan/227/orig -> origin/gh/XuehaiPan/227/orig 2025-08-14T21:24:05.8805797Z * [new branch] gh/XuehaiPan/231/base -> origin/gh/XuehaiPan/231/base 2025-08-14T21:24:05.8806144Z * [new branch] gh/XuehaiPan/231/head -> origin/gh/XuehaiPan/231/head 2025-08-14T21:24:05.8806550Z * [new branch] gh/XuehaiPan/231/orig -> origin/gh/XuehaiPan/231/orig 2025-08-14T21:24:05.8806903Z * [new branch] gh/XuehaiPan/232/base -> origin/gh/XuehaiPan/232/base 2025-08-14T21:24:05.8807731Z * [new branch] gh/XuehaiPan/232/head -> origin/gh/XuehaiPan/232/head 2025-08-14T21:24:05.8808085Z * [new branch] gh/XuehaiPan/232/orig -> origin/gh/XuehaiPan/232/orig 2025-08-14T21:24:05.8808450Z * [new branch] gh/XuehaiPan/249/base -> origin/gh/XuehaiPan/249/base 2025-08-14T21:24:05.8808807Z * [new branch] gh/XuehaiPan/249/head -> origin/gh/XuehaiPan/249/head 2025-08-14T21:24:05.8811661Z * [new branch] gh/XuehaiPan/249/orig -> origin/gh/XuehaiPan/249/orig 2025-08-14T21:24:05.8812015Z * [new branch] gh/XuehaiPan/253/base -> origin/gh/XuehaiPan/253/base 2025-08-14T21:24:05.8812377Z * [new branch] gh/XuehaiPan/253/head -> origin/gh/XuehaiPan/253/head 2025-08-14T21:24:05.8812718Z * [new branch] gh/XuehaiPan/253/orig -> origin/gh/XuehaiPan/253/orig 2025-08-14T21:24:05.8813057Z * [new branch] gh/XuehaiPan/254/base -> origin/gh/XuehaiPan/254/base 2025-08-14T21:24:05.8813408Z * [new branch] gh/XuehaiPan/254/head -> origin/gh/XuehaiPan/254/head 2025-08-14T21:24:05.8813759Z * [new branch] gh/XuehaiPan/254/orig -> origin/gh/XuehaiPan/254/orig 2025-08-14T21:24:05.8814124Z * [new branch] gh/XuehaiPan/255/base -> origin/gh/XuehaiPan/255/base 2025-08-14T21:24:05.8814901Z * [new branch] gh/XuehaiPan/255/head -> origin/gh/XuehaiPan/255/head 2025-08-14T21:24:05.8815537Z * [new branch] 
gh/XuehaiPan/255/orig -> origin/gh/XuehaiPan/255/orig 2025-08-14T21:24:05.8816697Z * [new branch] gh/XuehaiPan/257/base -> origin/gh/XuehaiPan/257/base 2025-08-14T21:24:05.8817065Z * [new branch] gh/XuehaiPan/257/head -> origin/gh/XuehaiPan/257/head 2025-08-14T21:24:05.8817807Z * [new branch] gh/XuehaiPan/257/orig -> origin/gh/XuehaiPan/257/orig 2025-08-14T21:24:05.8819120Z * [new branch] gh/XuehaiPan/271/base -> origin/gh/XuehaiPan/271/base 2025-08-14T21:24:05.8819487Z * [new branch] gh/XuehaiPan/271/head -> origin/gh/XuehaiPan/271/head 2025-08-14T21:24:05.8820190Z * [new branch] gh/XuehaiPan/271/orig -> origin/gh/XuehaiPan/271/orig 2025-08-14T21:24:05.8821511Z * [new branch] gh/XuehaiPan/283/base -> origin/gh/XuehaiPan/283/base 2025-08-14T21:24:05.8821879Z * [new branch] gh/XuehaiPan/283/head -> origin/gh/XuehaiPan/283/head 2025-08-14T21:24:05.8822690Z * [new branch] gh/XuehaiPan/283/orig -> origin/gh/XuehaiPan/283/orig 2025-08-14T21:24:05.8823879Z * [new branch] gh/XuehaiPan/290/base -> origin/gh/XuehaiPan/290/base 2025-08-14T21:24:05.8824326Z * [new branch] gh/XuehaiPan/290/head -> origin/gh/XuehaiPan/290/head 2025-08-14T21:24:05.8824872Z * [new branch] gh/XuehaiPan/290/orig -> origin/gh/XuehaiPan/290/orig 2025-08-14T21:24:05.8826229Z * [new branch] gh/XuehaiPan/328/base -> origin/gh/XuehaiPan/328/base 2025-08-14T21:24:05.8826584Z * [new branch] gh/XuehaiPan/328/head -> origin/gh/XuehaiPan/328/head 2025-08-14T21:24:05.8827298Z * [new branch] gh/XuehaiPan/328/orig -> origin/gh/XuehaiPan/328/orig 2025-08-14T21:24:05.8828582Z * [new branch] gh/XuehaiPan/339/base -> origin/gh/XuehaiPan/339/base 2025-08-14T21:24:05.8829263Z * [new branch] gh/XuehaiPan/339/head -> origin/gh/XuehaiPan/339/head 2025-08-14T21:24:05.8829733Z * [new branch] gh/XuehaiPan/339/orig -> origin/gh/XuehaiPan/339/orig 2025-08-14T21:24:05.8831031Z * [new branch] gh/XuehaiPan/343/base -> origin/gh/XuehaiPan/343/base 2025-08-14T21:24:05.8831576Z * [new branch] gh/XuehaiPan/343/head -> origin/gh/XuehaiPan/343/head 2025-08-14T21:24:05.8832180Z * [new branch] gh/XuehaiPan/343/orig -> origin/gh/XuehaiPan/343/orig 2025-08-14T21:24:05.8833751Z * [new branch] gh/XuehaiPan/344/base -> origin/gh/XuehaiPan/344/base 2025-08-14T21:24:05.8834127Z * [new branch] gh/XuehaiPan/344/head -> origin/gh/XuehaiPan/344/head 2025-08-14T21:24:05.8834968Z * [new branch] gh/XuehaiPan/344/orig -> origin/gh/XuehaiPan/344/orig 2025-08-14T21:24:05.8836186Z * [new branch] gh/XuehaiPan/345/base -> origin/gh/XuehaiPan/345/base 2025-08-14T21:24:05.8836566Z * [new branch] gh/XuehaiPan/345/head -> origin/gh/XuehaiPan/345/head 2025-08-14T21:24:05.8837164Z * [new branch] gh/XuehaiPan/345/orig -> origin/gh/XuehaiPan/345/orig 2025-08-14T21:24:05.8838372Z * [new branch] gh/XuehaiPan/346/base -> origin/gh/XuehaiPan/346/base 2025-08-14T21:24:05.8838840Z * [new branch] gh/XuehaiPan/346/head -> origin/gh/XuehaiPan/346/head 2025-08-14T21:24:05.8839924Z * [new branch] gh/XuehaiPan/346/orig -> origin/gh/XuehaiPan/346/orig 2025-08-14T21:24:05.8840782Z * [new branch] gh/XuehaiPan/347/base -> origin/gh/XuehaiPan/347/base 2025-08-14T21:24:05.8841414Z * [new branch] gh/XuehaiPan/347/head -> origin/gh/XuehaiPan/347/head 2025-08-14T21:24:05.8842208Z * [new branch] gh/XuehaiPan/347/orig -> origin/gh/XuehaiPan/347/orig 2025-08-14T21:24:05.8843424Z * [new branch] gh/XuehaiPan/348/base -> origin/gh/XuehaiPan/348/base 2025-08-14T21:24:05.8843897Z * [new branch] gh/XuehaiPan/348/head -> origin/gh/XuehaiPan/348/head 2025-08-14T21:24:05.8844619Z * [new branch] gh/XuehaiPan/348/orig -> 
origin/gh/XuehaiPan/348/orig 2025-08-14T21:24:05.8845863Z * [new branch] gh/XuehaiPan/350/base -> origin/gh/XuehaiPan/350/base 2025-08-14T21:24:05.8846285Z * [new branch] gh/XuehaiPan/350/head -> origin/gh/XuehaiPan/350/head 2025-08-14T21:24:05.8846896Z * [new branch] gh/XuehaiPan/350/orig -> origin/gh/XuehaiPan/350/orig 2025-08-14T21:24:05.8848048Z * [new branch] gh/XuehaiPan/352/base -> origin/gh/XuehaiPan/352/base 2025-08-14T21:24:05.8848499Z * [new branch] gh/XuehaiPan/352/head -> origin/gh/XuehaiPan/352/head 2025-08-14T21:24:05.8849170Z * [new branch] gh/XuehaiPan/352/orig -> origin/gh/XuehaiPan/352/orig 2025-08-14T21:24:05.8850476Z * [new branch] gh/XuehaiPan/356/base -> origin/gh/XuehaiPan/356/base 2025-08-14T21:24:05.8850903Z * [new branch] gh/XuehaiPan/356/head -> origin/gh/XuehaiPan/356/head 2025-08-14T21:24:05.8851619Z * [new branch] gh/XuehaiPan/356/orig -> origin/gh/XuehaiPan/356/orig 2025-08-14T21:24:05.8852803Z * [new branch] gh/XuehaiPan/357/base -> origin/gh/XuehaiPan/357/base 2025-08-14T21:24:05.8853140Z * [new branch] gh/XuehaiPan/357/head -> origin/gh/XuehaiPan/357/head 2025-08-14T21:24:05.8853834Z * [new branch] gh/XuehaiPan/357/orig -> origin/gh/XuehaiPan/357/orig 2025-08-14T21:24:05.8854988Z * [new branch] gh/XuehaiPan/358/base -> origin/gh/XuehaiPan/358/base 2025-08-14T21:24:05.8855348Z * [new branch] gh/XuehaiPan/358/head -> origin/gh/XuehaiPan/358/head 2025-08-14T21:24:05.8856178Z * [new branch] gh/XuehaiPan/358/orig -> origin/gh/XuehaiPan/358/orig 2025-08-14T21:24:05.8857313Z * [new branch] gh/XuehaiPan/359/base -> origin/gh/XuehaiPan/359/base 2025-08-14T21:24:05.8857678Z * [new branch] gh/XuehaiPan/359/head -> origin/gh/XuehaiPan/359/head 2025-08-14T21:24:05.8858543Z * [new branch] gh/XuehaiPan/359/orig -> origin/gh/XuehaiPan/359/orig 2025-08-14T21:24:05.8859802Z * [new branch] gh/XuehaiPan/360/base -> origin/gh/XuehaiPan/360/base 2025-08-14T21:24:05.8861059Z * [new branch] gh/XuehaiPan/360/head -> origin/gh/XuehaiPan/360/head 2025-08-14T21:24:05.8861569Z * [new branch] gh/XuehaiPan/360/orig -> origin/gh/XuehaiPan/360/orig 2025-08-14T21:24:05.8862785Z * [new branch] gh/XuehaiPan/365/base -> origin/gh/XuehaiPan/365/base 2025-08-14T21:24:05.8863217Z * [new branch] gh/XuehaiPan/365/head -> origin/gh/XuehaiPan/365/head 2025-08-14T21:24:05.8863918Z * [new branch] gh/XuehaiPan/365/orig -> origin/gh/XuehaiPan/365/orig 2025-08-14T21:24:05.8865152Z * [new branch] gh/XuehaiPan/366/base -> origin/gh/XuehaiPan/366/base 2025-08-14T21:24:05.8865507Z * [new branch] gh/XuehaiPan/366/head -> origin/gh/XuehaiPan/366/head 2025-08-14T21:24:05.8866760Z * [new branch] gh/XuehaiPan/368/base -> origin/gh/XuehaiPan/368/base 2025-08-14T21:24:05.8867120Z * [new branch] gh/XuehaiPan/368/head -> origin/gh/XuehaiPan/368/head 2025-08-14T21:24:05.8867898Z * [new branch] gh/XuehaiPan/368/orig -> origin/gh/XuehaiPan/368/orig 2025-08-14T21:24:05.8869159Z * [new branch] gh/XuehaiPan/369/base -> origin/gh/XuehaiPan/369/base 2025-08-14T21:24:05.8869523Z * [new branch] gh/XuehaiPan/369/head -> origin/gh/XuehaiPan/369/head 2025-08-14T21:24:05.8870228Z * [new branch] gh/XuehaiPan/369/orig -> origin/gh/XuehaiPan/369/orig 2025-08-14T21:24:05.8871440Z * [new branch] gh/XuehaiPan/370/base -> origin/gh/XuehaiPan/370/base 2025-08-14T21:24:05.8871922Z * [new branch] gh/XuehaiPan/370/head -> origin/gh/XuehaiPan/370/head 2025-08-14T21:24:05.8872510Z * [new branch] gh/XuehaiPan/370/orig -> origin/gh/XuehaiPan/370/orig 2025-08-14T21:24:05.8873687Z * [new branch] gh/XuehaiPan/371/base -> 
origin/gh/XuehaiPan/371/base 2025-08-14T21:24:05.8874046Z * [new branch] gh/XuehaiPan/371/head -> origin/gh/XuehaiPan/371/head 2025-08-14T21:24:05.8874769Z * [new branch] gh/XuehaiPan/371/orig -> origin/gh/XuehaiPan/371/orig 2025-08-14T21:24:05.8875933Z * [new branch] gh/XuehaiPan/372/base -> origin/gh/XuehaiPan/372/base 2025-08-14T21:24:05.8876290Z * [new branch] gh/XuehaiPan/372/head -> origin/gh/XuehaiPan/372/head 2025-08-14T21:24:05.8877003Z * [new branch] gh/XuehaiPan/372/orig -> origin/gh/XuehaiPan/372/orig 2025-08-14T21:24:05.8878161Z * [new branch] gh/XuehaiPan/373/base -> origin/gh/XuehaiPan/373/base 2025-08-14T21:24:05.8878533Z * [new branch] gh/XuehaiPan/373/head -> origin/gh/XuehaiPan/373/head 2025-08-14T21:24:05.8879330Z * [new branch] gh/XuehaiPan/373/orig -> origin/gh/XuehaiPan/373/orig 2025-08-14T21:24:05.8880694Z * [new branch] gh/XuehaiPan/374/base -> origin/gh/XuehaiPan/374/base 2025-08-14T21:24:05.8881058Z * [new branch] gh/XuehaiPan/374/head -> origin/gh/XuehaiPan/374/head 2025-08-14T21:24:05.8881588Z * [new branch] gh/XuehaiPan/374/orig -> origin/gh/XuehaiPan/374/orig 2025-08-14T21:24:05.8882722Z * [new branch] gh/XuehaiPan/375/base -> origin/gh/XuehaiPan/375/base 2025-08-14T21:24:05.8883226Z * [new branch] gh/XuehaiPan/375/head -> origin/gh/XuehaiPan/375/head 2025-08-14T21:24:05.8883969Z * [new branch] gh/XuehaiPan/375/orig -> origin/gh/XuehaiPan/375/orig 2025-08-14T21:24:05.8885087Z * [new branch] gh/XuehaiPan/376/base -> origin/gh/XuehaiPan/376/base 2025-08-14T21:24:05.8885552Z * [new branch] gh/XuehaiPan/376/head -> origin/gh/XuehaiPan/376/head 2025-08-14T21:24:05.8886205Z * [new branch] gh/XuehaiPan/376/orig -> origin/gh/XuehaiPan/376/orig 2025-08-14T21:24:05.8887473Z * [new branch] gh/XuehaiPan/377/base -> origin/gh/XuehaiPan/377/base 2025-08-14T21:24:05.8887835Z * [new branch] gh/XuehaiPan/377/head -> origin/gh/XuehaiPan/377/head 2025-08-14T21:24:05.8888556Z * [new branch] gh/XuehaiPan/377/orig -> origin/gh/XuehaiPan/377/orig 2025-08-14T21:24:05.8889791Z * [new branch] gh/XuehaiPan/378/base -> origin/gh/XuehaiPan/378/base 2025-08-14T21:24:05.8890152Z * [new branch] gh/XuehaiPan/378/head -> origin/gh/XuehaiPan/378/head 2025-08-14T21:24:05.8890963Z * [new branch] gh/XuehaiPan/378/orig -> origin/gh/XuehaiPan/378/orig 2025-08-14T21:24:05.8892434Z * [new branch] gh/XuehaiPan/379/base -> origin/gh/XuehaiPan/379/base 2025-08-14T21:24:05.8892790Z * [new branch] gh/XuehaiPan/379/head -> origin/gh/XuehaiPan/379/head 2025-08-14T21:24:05.8893173Z * [new branch] gh/XuehaiPan/379/orig -> origin/gh/XuehaiPan/379/orig 2025-08-14T21:24:05.8894674Z * [new branch] gh/ZhiweiYan-96/39/base -> origin/gh/ZhiweiYan-96/39/base 2025-08-14T21:24:05.8895062Z * [new branch] gh/ZhiweiYan-96/39/head -> origin/gh/ZhiweiYan-96/39/head 2025-08-14T21:24:05.8895908Z * [new branch] gh/ZhiweiYan-96/39/orig -> origin/gh/ZhiweiYan-96/39/orig 2025-08-14T21:24:05.8897122Z * [new branch] gh/ZhiweiYan-96/44/base -> origin/gh/ZhiweiYan-96/44/base 2025-08-14T21:24:05.8897490Z * [new branch] gh/ZhiweiYan-96/44/head -> origin/gh/ZhiweiYan-96/44/head 2025-08-14T21:24:05.8898651Z * [new branch] gh/ZhiweiYan-96/45/base -> origin/gh/ZhiweiYan-96/45/base 2025-08-14T21:24:05.8899017Z * [new branch] gh/ZhiweiYan-96/45/head -> origin/gh/ZhiweiYan-96/45/head 2025-08-14T21:24:05.8900292Z * [new branch] gh/ZhiweiYan-96/49/base -> origin/gh/ZhiweiYan-96/49/base 2025-08-14T21:24:05.8900814Z * [new branch] gh/ZhiweiYan-96/49/head -> origin/gh/ZhiweiYan-96/49/head 2025-08-14T21:24:05.8902021Z * [new branch] 
gh/ZhiweiYan-96/62/base -> origin/gh/ZhiweiYan-96/62/base 2025-08-14T21:24:05.8902505Z * [new branch] gh/ZhiweiYan-96/62/head -> origin/gh/ZhiweiYan-96/62/head 2025-08-14T21:24:05.8903679Z * [new branch] gh/ZhiweiYan-96/64/base -> origin/gh/ZhiweiYan-96/64/base 2025-08-14T21:24:05.8904115Z * [new branch] gh/ZhiweiYan-96/64/head -> origin/gh/ZhiweiYan-96/64/head 2025-08-14T21:24:05.8904921Z * [new branch] gh/ZhiweiYan-96/64/orig -> origin/gh/ZhiweiYan-96/64/orig 2025-08-14T21:24:05.8906004Z * [new branch] gh/ZhiweiYan-96/65/base -> origin/gh/ZhiweiYan-96/65/base 2025-08-14T21:24:05.8907221Z * [new branch] gh/ZhiweiYan-96/65/head -> origin/gh/ZhiweiYan-96/65/head 2025-08-14T21:24:05.8907664Z * [new branch] gh/ZhiweiYan-96/65/orig -> origin/gh/ZhiweiYan-96/65/orig 2025-08-14T21:24:05.8908492Z * [new branch] gh/ZhiweiYan-96/66/base -> origin/gh/ZhiweiYan-96/66/base 2025-08-14T21:24:05.8908932Z * [new branch] gh/ZhiweiYan-96/66/head -> origin/gh/ZhiweiYan-96/66/head 2025-08-14T21:24:05.8909656Z * [new branch] gh/ZhiweiYan-96/67/base -> origin/gh/ZhiweiYan-96/67/base 2025-08-14T21:24:05.8910334Z * [new branch] gh/ZhiweiYan-96/67/head -> origin/gh/ZhiweiYan-96/67/head 2025-08-14T21:24:05.8911162Z * [new branch] gh/ZhiweiYan-96/68/base -> origin/gh/ZhiweiYan-96/68/base 2025-08-14T21:24:05.8911781Z * [new branch] gh/ZhiweiYan-96/68/head -> origin/gh/ZhiweiYan-96/68/head 2025-08-14T21:24:05.8912555Z * [new branch] gh/ZhiweiYan-96/68/orig -> origin/gh/ZhiweiYan-96/68/orig 2025-08-14T21:24:05.8913819Z * [new branch] gh/aakhundov/1/base -> origin/gh/aakhundov/1/base 2025-08-14T21:24:05.8914212Z * [new branch] gh/aakhundov/1/head -> origin/gh/aakhundov/1/head 2025-08-14T21:24:05.8915450Z * [new branch] gh/aakhundov/2/base -> origin/gh/aakhundov/2/base 2025-08-14T21:24:05.8915826Z * [new branch] gh/aakhundov/2/head -> origin/gh/aakhundov/2/head 2025-08-14T21:24:05.8917109Z * [new branch] gh/aditew01/openblas -> origin/gh/aditew01/openblas 2025-08-14T21:24:05.8917489Z * [new branch] gh/aditew01/sbgemm -> origin/gh/aditew01/sbgemm 2025-08-14T21:24:05.8918239Z * [new branch] gh/aditew01/vecbf16 -> origin/gh/aditew01/vecbf16 2025-08-14T21:24:05.8919592Z * [new branch] gh/alexbrauckmann/paddedtensor_faketensor_init -> origin/gh/alexbrauckmann/paddedtensor_faketensor_init 2025-08-14T21:24:05.8920142Z * [new branch] gh/alexbrauckmann/paddedtensor_init -> origin/gh/alexbrauckmann/paddedtensor_init 2025-08-14T21:24:05.8920671Z * [new branch] gh/alexbrauckmann/paddedtensor_meta_init -> origin/gh/alexbrauckmann/paddedtensor_meta_init 2025-08-14T21:24:05.8921654Z * [new branch] gh/alexsamardzic/7/base -> origin/gh/alexsamardzic/7/base 2025-08-14T21:24:05.8922360Z * [new branch] gh/alexsamardzic/7/head -> origin/gh/alexsamardzic/7/head 2025-08-14T21:24:05.8923029Z * [new branch] gh/alexsamardzic/7/orig -> origin/gh/alexsamardzic/7/orig 2025-08-14T21:24:05.8923922Z * [new branch] gh/alexsamardzic/8/base -> origin/gh/alexsamardzic/8/base 2025-08-14T21:24:05.8926086Z * [new branch] gh/alexsamardzic/8/head -> origin/gh/alexsamardzic/8/head 2025-08-14T21:24:05.8927291Z * [new branch] gh/alexsamardzic/8/orig -> origin/gh/alexsamardzic/8/orig 2025-08-14T21:24:05.8927653Z * [new branch] gh/amjames/18/base -> origin/gh/amjames/18/base 2025-08-14T21:24:05.8927983Z * [new branch] gh/amjames/18/head -> origin/gh/amjames/18/head 2025-08-14T21:24:05.8928346Z * [new branch] gh/amjames/18/orig -> origin/gh/amjames/18/orig 2025-08-14T21:24:05.8928948Z * [new branch] gh/andrewor14/35/base -> origin/gh/andrewor14/35/base 
2025-08-14T21:24:05.8929339Z * [new branch] gh/andrewor14/35/head -> origin/gh/andrewor14/35/head 2025-08-14T21:24:05.8930103Z * [new branch] gh/andrewor14/35/orig -> origin/gh/andrewor14/35/orig 2025-08-14T21:24:05.8931328Z * [new branch] gh/andrewor14/50/base -> origin/gh/andrewor14/50/base 2025-08-14T21:24:05.8931899Z * [new branch] gh/andrewor14/50/head -> origin/gh/andrewor14/50/head 2025-08-14T21:24:05.8932829Z * [new branch] gh/andrewor14/50/orig -> origin/gh/andrewor14/50/orig 2025-08-14T21:24:05.8934266Z * [new branch] gh/andyanwang/1/base -> origin/gh/andyanwang/1/base 2025-08-14T21:24:05.8934725Z * [new branch] gh/andyanwang/1/head -> origin/gh/andyanwang/1/head 2025-08-14T21:24:05.8935456Z * [new branch] gh/andyanwang/1/orig -> origin/gh/andyanwang/1/orig 2025-08-14T21:24:05.8936704Z * [new branch] gh/andyanwang/13/base -> origin/gh/andyanwang/13/base 2025-08-14T21:24:05.8937214Z * [new branch] gh/andyanwang/13/head -> origin/gh/andyanwang/13/head 2025-08-14T21:24:05.8937896Z * [new branch] gh/andyanwang/13/orig -> origin/gh/andyanwang/13/orig 2025-08-14T21:24:05.8939117Z * [new branch] gh/andyanwang/2/base -> origin/gh/andyanwang/2/base 2025-08-14T21:24:05.8939629Z * [new branch] gh/andyanwang/2/head -> origin/gh/andyanwang/2/head 2025-08-14T21:24:05.8940475Z * [new branch] gh/andyanwang/2/orig -> origin/gh/andyanwang/2/orig 2025-08-14T21:24:05.8942008Z * [new branch] gh/andyanwang/28/base -> origin/gh/andyanwang/28/base 2025-08-14T21:24:05.8942475Z * [new branch] gh/andyanwang/28/head -> origin/gh/andyanwang/28/head 2025-08-14T21:24:05.8943174Z * [new branch] gh/andyanwang/28/orig -> origin/gh/andyanwang/28/orig 2025-08-14T21:24:05.8944022Z * [new branch] gh/andyanwang/3/base -> origin/gh/andyanwang/3/base 2025-08-14T21:24:05.8944715Z * [new branch] gh/andyanwang/3/head -> origin/gh/andyanwang/3/head 2025-08-14T21:24:05.8945476Z * [new branch] gh/andyanwang/3/orig -> origin/gh/andyanwang/3/orig 2025-08-14T21:24:05.8946611Z * [new branch] gh/andyanwang/30/base -> origin/gh/andyanwang/30/base 2025-08-14T21:24:05.8947284Z * [new branch] gh/andyanwang/30/orig -> origin/gh/andyanwang/30/orig 2025-08-14T21:24:05.8948586Z * [new branch] gh/andyanwang/31/base -> origin/gh/andyanwang/31/base 2025-08-14T21:24:05.8949222Z * [new branch] gh/andyanwang/31/orig -> origin/gh/andyanwang/31/orig 2025-08-14T21:24:05.8950710Z * [new branch] gh/andyanwang/32/base -> origin/gh/andyanwang/32/base 2025-08-14T21:24:05.8951194Z * [new branch] gh/andyanwang/32/head -> origin/gh/andyanwang/32/head 2025-08-14T21:24:05.8951893Z * [new branch] gh/andyanwang/32/orig -> origin/gh/andyanwang/32/orig 2025-08-14T21:24:05.8954097Z * [new branch] gh/andyanwang/33/base -> origin/gh/andyanwang/33/base 2025-08-14T21:24:05.8954540Z * [new branch] gh/andyanwang/33/head -> origin/gh/andyanwang/33/head 2025-08-14T21:24:05.8954918Z * [new branch] gh/andyanwang/33/orig -> origin/gh/andyanwang/33/orig 2025-08-14T21:24:05.8955379Z * [new branch] gh/andyanwang/34/base -> origin/gh/andyanwang/34/base 2025-08-14T21:24:05.8956167Z * [new branch] gh/andyanwang/34/head -> origin/gh/andyanwang/34/head 2025-08-14T21:24:05.8956950Z * [new branch] gh/andyanwang/34/orig -> origin/gh/andyanwang/34/orig 2025-08-14T21:24:05.8958172Z * [new branch] gh/andyanwang/35/base -> origin/gh/andyanwang/35/base 2025-08-14T21:24:05.8959000Z * [new branch] gh/andyanwang/35/head -> origin/gh/andyanwang/35/head 2025-08-14T21:24:05.8959512Z * [new branch] gh/andyanwang/35/orig -> origin/gh/andyanwang/35/orig 2025-08-14T21:24:05.8961096Z * [new branch] 
gh/andyanwang/36/base -> origin/gh/andyanwang/36/base 2025-08-14T21:24:05.8963089Z * [new branch] gh/andyanwang/36/head -> origin/gh/andyanwang/36/head 2025-08-14T21:24:05.8965630Z * [new branch] gh/andyanwang/36/orig -> origin/gh/andyanwang/36/orig 2025-08-14T21:24:05.8965995Z * [new branch] gh/andyanwang/37/base -> origin/gh/andyanwang/37/base 2025-08-14T21:24:05.8966361Z * [new branch] gh/andyanwang/37/head -> origin/gh/andyanwang/37/head 2025-08-14T21:24:05.8967046Z * [new branch] gh/andyanwang/37/orig -> origin/gh/andyanwang/37/orig 2025-08-14T21:24:05.8969238Z * [new branch] gh/andyanwang/38/base -> origin/gh/andyanwang/38/base 2025-08-14T21:24:05.8969597Z * [new branch] gh/andyanwang/38/head -> origin/gh/andyanwang/38/head 2025-08-14T21:24:05.8969960Z * [new branch] gh/andyanwang/38/orig -> origin/gh/andyanwang/38/orig 2025-08-14T21:24:05.8970318Z * [new branch] gh/andyanwang/39/base -> origin/gh/andyanwang/39/base 2025-08-14T21:24:05.8970665Z * [new branch] gh/andyanwang/39/head -> origin/gh/andyanwang/39/head 2025-08-14T21:24:05.8971021Z * [new branch] gh/andyanwang/39/orig -> origin/gh/andyanwang/39/orig 2025-08-14T21:24:05.8975512Z * [new branch] gh/andyanwang/4/base -> origin/gh/andyanwang/4/base 2025-08-14T21:24:05.8975966Z * [new branch] gh/andyanwang/4/head -> origin/gh/andyanwang/4/head 2025-08-14T21:24:05.8976318Z * [new branch] gh/andyanwang/4/orig -> origin/gh/andyanwang/4/orig 2025-08-14T21:24:05.8976678Z * [new branch] gh/andyanwang/40/base -> origin/gh/andyanwang/40/base 2025-08-14T21:24:05.8977046Z * [new branch] gh/andyanwang/40/head -> origin/gh/andyanwang/40/head 2025-08-14T21:24:05.8977403Z * [new branch] gh/andyanwang/40/orig -> origin/gh/andyanwang/40/orig 2025-08-14T21:24:05.8977754Z * [new branch] gh/angelayi/102/base -> origin/gh/angelayi/102/base 2025-08-14T21:24:05.8978113Z * [new branch] gh/angelayi/102/head -> origin/gh/angelayi/102/head 2025-08-14T21:24:05.8978464Z * [new branch] gh/angelayi/102/orig -> origin/gh/angelayi/102/orig 2025-08-14T21:24:05.8978812Z * [new branch] gh/angelayi/103/base -> origin/gh/angelayi/103/base 2025-08-14T21:24:05.8979175Z * [new branch] gh/angelayi/103/head -> origin/gh/angelayi/103/head 2025-08-14T21:24:05.8979524Z * [new branch] gh/angelayi/103/orig -> origin/gh/angelayi/103/orig 2025-08-14T21:24:05.8981427Z * [new branch] gh/angelayi/104/base -> origin/gh/angelayi/104/base 2025-08-14T21:24:05.8981962Z * [new branch] gh/angelayi/104/head -> origin/gh/angelayi/104/head 2025-08-14T21:24:05.8984918Z * [new branch] gh/angelayi/104/orig -> origin/gh/angelayi/104/orig 2025-08-14T21:24:05.8985344Z * [new branch] gh/angelayi/105/base -> origin/gh/angelayi/105/base 2025-08-14T21:24:05.8985725Z * [new branch] gh/angelayi/105/head -> origin/gh/angelayi/105/head 2025-08-14T21:24:05.8986094Z * [new branch] gh/angelayi/105/orig -> origin/gh/angelayi/105/orig 2025-08-14T21:24:05.8986468Z * [new branch] gh/angelayi/106/base -> origin/gh/angelayi/106/base 2025-08-14T21:24:05.8988204Z * [new branch] gh/angelayi/106/head -> origin/gh/angelayi/106/head 2025-08-14T21:24:05.8988709Z * [new branch] gh/angelayi/106/orig -> origin/gh/angelayi/106/orig 2025-08-14T21:24:05.8989218Z * [new branch] gh/angelayi/107/base -> origin/gh/angelayi/107/base 2025-08-14T21:24:05.8990104Z * [new branch] gh/angelayi/107/head -> origin/gh/angelayi/107/head 2025-08-14T21:24:05.8990547Z * [new branch] gh/angelayi/108/base -> origin/gh/angelayi/108/base 2025-08-14T21:24:05.8990927Z * [new branch] gh/angelayi/108/head -> origin/gh/angelayi/108/head 
2025-08-14T21:24:05.8991298Z * [new branch] gh/angelayi/108/orig -> origin/gh/angelayi/108/orig 2025-08-14T21:24:05.8991669Z * [new branch] gh/angelayi/109/base -> origin/gh/angelayi/109/base 2025-08-14T21:24:05.8992096Z * [new branch] gh/angelayi/109/head -> origin/gh/angelayi/109/head 2025-08-14T21:24:05.8992697Z * [new branch] gh/angelayi/109/orig -> origin/gh/angelayi/109/orig 2025-08-14T21:24:05.8993373Z * [new branch] gh/angelayi/110/base -> origin/gh/angelayi/110/base 2025-08-14T21:24:05.8994049Z * [new branch] gh/angelayi/110/head -> origin/gh/angelayi/110/head 2025-08-14T21:24:05.8994756Z * [new branch] gh/angelayi/110/orig -> origin/gh/angelayi/110/orig 2025-08-14T21:24:05.8995748Z * [new branch] gh/angelayi/97/base -> origin/gh/angelayi/97/base 2025-08-14T21:24:05.8996357Z * [new branch] gh/angelayi/97/head -> origin/gh/angelayi/97/head 2025-08-14T21:24:05.8997076Z * [new branch] gh/angelayi/97/orig -> origin/gh/angelayi/97/orig 2025-08-14T21:24:05.8998580Z * [new branch] gh/ani300/1/base -> origin/gh/ani300/1/base 2025-08-14T21:24:05.8999022Z * [new branch] gh/ani300/1/head -> origin/gh/ani300/1/head 2025-08-14T21:24:05.8999805Z * [new branch] gh/ani300/1/orig -> origin/gh/ani300/1/orig 2025-08-14T21:24:05.9001372Z * [new branch] gh/anijain2305/753/base -> origin/gh/anijain2305/753/base 2025-08-14T21:24:05.9001766Z * [new branch] gh/anijain2305/753/head -> origin/gh/anijain2305/753/head 2025-08-14T21:24:05.9002552Z * [new branch] gh/anijain2305/753/orig -> origin/gh/anijain2305/753/orig 2025-08-14T21:24:05.9003889Z * [new branch] gh/anijain2305/766/base -> origin/gh/anijain2305/766/base 2025-08-14T21:24:05.9004258Z * [new branch] gh/anijain2305/766/head -> origin/gh/anijain2305/766/head 2025-08-14T21:24:05.9004997Z * [new branch] gh/anijain2305/766/orig -> origin/gh/anijain2305/766/orig 2025-08-14T21:24:05.9006226Z * [new branch] gh/anijain2305/790/base -> origin/gh/anijain2305/790/base 2025-08-14T21:24:05.9006608Z * [new branch] gh/anijain2305/790/head -> origin/gh/anijain2305/790/head 2025-08-14T21:24:05.9007717Z * [new branch] gh/anijain2305/790/orig -> origin/gh/anijain2305/790/orig 2025-08-14T21:24:05.9008440Z * [new branch] gh/anijain2305/792/base -> origin/gh/anijain2305/792/base 2025-08-14T21:24:05.9009811Z * [new branch] gh/anijain2305/792/head -> origin/gh/anijain2305/792/head 2025-08-14T21:24:05.9010348Z * [new branch] gh/anijain2305/792/orig -> origin/gh/anijain2305/792/orig 2025-08-14T21:24:05.9011612Z * [new branch] gh/anijain2305/803/base -> origin/gh/anijain2305/803/base 2025-08-14T21:24:05.9012177Z * [new branch] gh/anijain2305/803/head -> origin/gh/anijain2305/803/head 2025-08-14T21:24:05.9012866Z * [new branch] gh/anijain2305/803/orig -> origin/gh/anijain2305/803/orig 2025-08-14T21:24:05.9013814Z * [new branch] gh/anijain2305/804/base -> origin/gh/anijain2305/804/base 2025-08-14T21:24:05.9014341Z * [new branch] gh/anijain2305/804/head -> origin/gh/anijain2305/804/head 2025-08-14T21:24:05.9015029Z * [new branch] gh/anijain2305/804/orig -> origin/gh/anijain2305/804/orig 2025-08-14T21:24:05.9016538Z * [new branch] gh/anijain2305/805/base -> origin/gh/anijain2305/805/base 2025-08-14T21:24:05.9016997Z * [new branch] gh/anijain2305/805/head -> origin/gh/anijain2305/805/head 2025-08-14T21:24:05.9017861Z * [new branch] gh/anijain2305/805/orig -> origin/gh/anijain2305/805/orig 2025-08-14T21:24:05.9019180Z * [new branch] gh/anijain2305/810/base -> origin/gh/anijain2305/810/base 2025-08-14T21:24:05.9019640Z * [new branch] gh/anijain2305/810/head -> 
origin/gh/anijain2305/810/head 2025-08-14T21:24:05.9020413Z * [new branch] gh/anijain2305/810/orig -> origin/gh/anijain2305/810/orig 2025-08-14T21:24:05.9021842Z * [new branch] gh/anijain2305/811/base -> origin/gh/anijain2305/811/base 2025-08-14T21:24:05.9022249Z * [new branch] gh/anijain2305/811/head -> origin/gh/anijain2305/811/head 2025-08-14T21:24:05.9022720Z * [new branch] gh/anijain2305/811/orig -> origin/gh/anijain2305/811/orig 2025-08-14T21:24:05.9024080Z * [new branch] gh/anijain2305/812/base -> origin/gh/anijain2305/812/base 2025-08-14T21:24:05.9024492Z * [new branch] gh/anijain2305/812/head -> origin/gh/anijain2305/812/head 2025-08-14T21:24:05.9027421Z * [new branch] gh/anijain2305/812/orig -> origin/gh/anijain2305/812/orig 2025-08-14T21:24:05.9027867Z * [new branch] gh/anijain2305/813/base -> origin/gh/anijain2305/813/base 2025-08-14T21:24:05.9028243Z * [new branch] gh/anijain2305/813/head -> origin/gh/anijain2305/813/head 2025-08-14T21:24:05.9030615Z * [new branch] gh/anijain2305/813/orig -> origin/gh/anijain2305/813/orig 2025-08-14T21:24:05.9036215Z * [new branch] gh/anijain2305/814/base -> origin/gh/anijain2305/814/base 2025-08-14T21:24:05.9038198Z * [new branch] gh/anijain2305/814/head -> origin/gh/anijain2305/814/head 2025-08-14T21:24:05.9038609Z * [new branch] gh/anijain2305/814/orig -> origin/gh/anijain2305/814/orig 2025-08-14T21:24:05.9038999Z * [new branch] gh/anijain2305/815/base -> origin/gh/anijain2305/815/base 2025-08-14T21:24:05.9039382Z * [new branch] gh/anijain2305/815/head -> origin/gh/anijain2305/815/head 2025-08-14T21:24:05.9039763Z * [new branch] gh/anijain2305/815/orig -> origin/gh/anijain2305/815/orig 2025-08-14T21:24:05.9040135Z * [new branch] gh/anijain2305/816/base -> origin/gh/anijain2305/816/base 2025-08-14T21:24:05.9040500Z * [new branch] gh/anijain2305/816/head -> origin/gh/anijain2305/816/head 2025-08-14T21:24:05.9040899Z * [new branch] gh/anijain2305/817/base -> origin/gh/anijain2305/817/base 2025-08-14T21:24:05.9041328Z * [new branch] gh/anijain2305/817/head -> origin/gh/anijain2305/817/head 2025-08-14T21:24:05.9041699Z * [new branch] gh/anijain2305/817/orig -> origin/gh/anijain2305/817/orig 2025-08-14T21:24:05.9042213Z * [new branch] gh/anijain2305/818/base -> origin/gh/anijain2305/818/base 2025-08-14T21:24:05.9042602Z * [new branch] gh/anijain2305/818/head -> origin/gh/anijain2305/818/head 2025-08-14T21:24:05.9042984Z * [new branch] gh/anijain2305/818/orig -> origin/gh/anijain2305/818/orig 2025-08-14T21:24:05.9043402Z * [new branch] gh/anijain2305/819/base -> origin/gh/anijain2305/819/base 2025-08-14T21:24:05.9043851Z * [new branch] gh/anijain2305/819/head -> origin/gh/anijain2305/819/head 2025-08-14T21:24:05.9044240Z * [new branch] gh/anijain2305/819/orig -> origin/gh/anijain2305/819/orig 2025-08-14T21:24:05.9047911Z * [new branch] gh/anijain2305/820/base -> origin/gh/anijain2305/820/base 2025-08-14T21:24:05.9048569Z * [new branch] gh/anijain2305/820/head -> origin/gh/anijain2305/820/head 2025-08-14T21:24:05.9049695Z * [new branch] gh/anijain2305/820/orig -> origin/gh/anijain2305/820/orig 2025-08-14T21:24:05.9050347Z * [new branch] gh/anijain2305/821/base -> origin/gh/anijain2305/821/base 2025-08-14T21:24:05.9051065Z * [new branch] gh/anijain2305/821/head -> origin/gh/anijain2305/821/head 2025-08-14T21:24:05.9051789Z * [new branch] gh/anijain2305/821/orig -> origin/gh/anijain2305/821/orig 2025-08-14T21:24:05.9053146Z * [new branch] gh/anijain2305/822/base -> origin/gh/anijain2305/822/base 2025-08-14T21:24:05.9053717Z * [new branch] 
gh/anijain2305/822/head -> origin/gh/anijain2305/822/head 2025-08-14T21:24:05.9054597Z * [new branch] gh/anijain2305/822/orig -> origin/gh/anijain2305/822/orig 2025-08-14T21:24:05.9055370Z * [new branch] gh/anijain2305/823/base -> origin/gh/anijain2305/823/base 2025-08-14T21:24:05.9056017Z * [new branch] gh/anijain2305/823/head -> origin/gh/anijain2305/823/head 2025-08-14T21:24:05.9056703Z * [new branch] gh/anijain2305/823/orig -> origin/gh/anijain2305/823/orig 2025-08-14T21:24:05.9058639Z * [new branch] gh/anijain2305/824/base -> origin/gh/anijain2305/824/base 2025-08-14T21:24:05.9059046Z * [new branch] gh/anijain2305/824/head -> origin/gh/anijain2305/824/head 2025-08-14T21:24:05.9059634Z * [new branch] gh/anijain2305/824/orig -> origin/gh/anijain2305/824/orig 2025-08-14T21:24:05.9063917Z * [new branch] gh/anijain2305/825/base -> origin/gh/anijain2305/825/base 2025-08-14T21:24:05.9064267Z * [new branch] gh/anijain2305/825/head -> origin/gh/anijain2305/825/head 2025-08-14T21:24:05.9064712Z * [new branch] gh/anijain2305/825/orig -> origin/gh/anijain2305/825/orig 2025-08-14T21:24:05.9065076Z * [new branch] gh/anijain2305/826/base -> origin/gh/anijain2305/826/base 2025-08-14T21:24:05.9065630Z * [new branch] gh/anijain2305/826/head -> origin/gh/anijain2305/826/head 2025-08-14T21:24:05.9068791Z * [new branch] gh/anijain2305/826/orig -> origin/gh/anijain2305/826/orig 2025-08-14T21:24:05.9069161Z * [new branch] gh/anijain2305/827/base -> origin/gh/anijain2305/827/base 2025-08-14T21:24:05.9069505Z * [new branch] gh/anijain2305/827/head -> origin/gh/anijain2305/827/head 2025-08-14T21:24:05.9069846Z * [new branch] gh/anijain2305/827/orig -> origin/gh/anijain2305/827/orig 2025-08-14T21:24:05.9070200Z * [new branch] gh/anijain2305/828/base -> origin/gh/anijain2305/828/base 2025-08-14T21:24:05.9070588Z * [new branch] gh/anijain2305/828/head -> origin/gh/anijain2305/828/head 2025-08-14T21:24:05.9073646Z * [new branch] gh/anijain2305/828/orig -> origin/gh/anijain2305/828/orig 2025-08-14T21:24:05.9074012Z * [new branch] gh/anijain2305/829/base -> origin/gh/anijain2305/829/base 2025-08-14T21:24:05.9077441Z * [new branch] gh/anijain2305/829/head -> origin/gh/anijain2305/829/head 2025-08-14T21:24:05.9077793Z * [new branch] gh/anijain2305/829/orig -> origin/gh/anijain2305/829/orig 2025-08-14T21:24:05.9078124Z * [new branch] gh/anijain2305/830/base -> origin/gh/anijain2305/830/base 2025-08-14T21:24:05.9078459Z * [new branch] gh/anijain2305/830/head -> origin/gh/anijain2305/830/head 2025-08-14T21:24:05.9078796Z * [new branch] gh/anijain2305/830/orig -> origin/gh/anijain2305/830/orig 2025-08-14T21:24:05.9079132Z * [new branch] gh/anijain2305/831/base -> origin/gh/anijain2305/831/base 2025-08-14T21:24:05.9079468Z * [new branch] gh/anijain2305/831/head -> origin/gh/anijain2305/831/head 2025-08-14T21:24:05.9079806Z * [new branch] gh/anijain2305/831/orig -> origin/gh/anijain2305/831/orig 2025-08-14T21:24:05.9080146Z * [new branch] gh/anijain2305/832/base -> origin/gh/anijain2305/832/base 2025-08-14T21:24:05.9080487Z * [new branch] gh/anijain2305/832/head -> origin/gh/anijain2305/832/head 2025-08-14T21:24:05.9080811Z * [new branch] gh/anijain2305/832/orig -> origin/gh/anijain2305/832/orig 2025-08-14T21:24:05.9081146Z * [new branch] gh/anijain2305/833/base -> origin/gh/anijain2305/833/base 2025-08-14T21:24:05.9081607Z * [new branch] gh/anijain2305/833/head -> origin/gh/anijain2305/833/head 2025-08-14T21:24:05.9082070Z * [new branch] gh/anijain2305/833/orig -> origin/gh/anijain2305/833/orig 2025-08-14T21:24:05.9082582Z 
* [new branch] gh/anijain2305/834/base -> origin/gh/anijain2305/834/base 2025-08-14T21:24:05.9082966Z * [new branch] gh/anijain2305/834/head -> origin/gh/anijain2305/834/head 2025-08-14T21:24:05.9083708Z * [new branch] gh/anijain2305/834/orig -> origin/gh/anijain2305/834/orig 2025-08-14T21:24:05.9084870Z * [new branch] gh/anijain2305/835/base -> origin/gh/anijain2305/835/base 2025-08-14T21:24:05.9085211Z * [new branch] gh/anijain2305/835/head -> origin/gh/anijain2305/835/head 2025-08-14T21:24:05.9086044Z * [new branch] gh/anijain2305/835/orig -> origin/gh/anijain2305/835/orig 2025-08-14T21:24:05.9086782Z * [new branch] gh/anijain2305/836/base -> origin/gh/anijain2305/836/base 2025-08-14T21:24:05.9087447Z * [new branch] gh/anijain2305/836/head -> origin/gh/anijain2305/836/head 2025-08-14T21:24:05.9088090Z * [new branch] gh/anijain2305/836/orig -> origin/gh/anijain2305/836/orig 2025-08-14T21:24:05.9089364Z * [new branch] gh/anijain2305/837/base -> origin/gh/anijain2305/837/base 2025-08-14T21:24:05.9089710Z * [new branch] gh/anijain2305/837/head -> origin/gh/anijain2305/837/head 2025-08-14T21:24:05.9090361Z * [new branch] gh/anijain2305/837/orig -> origin/gh/anijain2305/837/orig 2025-08-14T21:24:05.9091610Z * [new branch] gh/anijain2305/838/base -> origin/gh/anijain2305/838/base 2025-08-14T21:24:05.9091961Z * [new branch] gh/anijain2305/838/head -> origin/gh/anijain2305/838/head 2025-08-14T21:24:05.9092620Z * [new branch] gh/anijain2305/838/orig -> origin/gh/anijain2305/838/orig 2025-08-14T21:24:05.9094306Z * [new branch] gh/anijain2305/839/base -> origin/gh/anijain2305/839/base 2025-08-14T21:24:05.9094686Z * [new branch] gh/anijain2305/839/head -> origin/gh/anijain2305/839/head 2025-08-14T21:24:05.9095332Z * [new branch] gh/anijain2305/839/orig -> origin/gh/anijain2305/839/orig 2025-08-14T21:24:05.9096704Z * [new branch] gh/anijain2305/840/base -> origin/gh/anijain2305/840/base 2025-08-14T21:24:05.9097071Z * [new branch] gh/anijain2305/840/head -> origin/gh/anijain2305/840/head 2025-08-14T21:24:05.9097898Z * [new branch] gh/anijain2305/840/orig -> origin/gh/anijain2305/840/orig 2025-08-14T21:24:05.9099069Z * [new branch] gh/anijain2305/841/base -> origin/gh/anijain2305/841/base 2025-08-14T21:24:05.9099423Z * [new branch] gh/anijain2305/841/head -> origin/gh/anijain2305/841/head 2025-08-14T21:24:05.9100385Z * [new branch] gh/anijain2305/841/orig -> origin/gh/anijain2305/841/orig 2025-08-14T21:24:05.9101948Z * [new branch] gh/anijain2305/842/base -> origin/gh/anijain2305/842/base 2025-08-14T21:24:05.9102318Z * [new branch] gh/anijain2305/842/head -> origin/gh/anijain2305/842/head 2025-08-14T21:24:05.9102688Z * [new branch] gh/anijain2305/842/orig -> origin/gh/anijain2305/842/orig 2025-08-14T21:24:05.9103980Z * [new branch] gh/anijain2305/843/base -> origin/gh/anijain2305/843/base 2025-08-14T21:24:05.9104342Z * [new branch] gh/anijain2305/843/head -> origin/gh/anijain2305/843/head 2025-08-14T21:24:05.9109401Z * [new branch] gh/anijain2305/843/orig -> origin/gh/anijain2305/843/orig 2025-08-14T21:24:05.9109809Z * [new branch] gh/anijain2305/844/base -> origin/gh/anijain2305/844/base 2025-08-14T21:24:05.9110171Z * [new branch] gh/anijain2305/844/head -> origin/gh/anijain2305/844/head 2025-08-14T21:24:05.9110535Z * [new branch] gh/anijain2305/844/orig -> origin/gh/anijain2305/844/orig 2025-08-14T21:24:05.9110904Z * [new branch] gh/anijain2305/845/base -> origin/gh/anijain2305/845/base 2025-08-14T21:24:05.9111276Z * [new branch] gh/anijain2305/845/head -> origin/gh/anijain2305/845/head 
2025-08-14T21:24:05.9111704Z * [new branch] gh/anijain2305/845/orig -> origin/gh/anijain2305/845/orig 2025-08-14T21:24:05.9114475Z * [new branch] gh/anijain2305/846/base -> origin/gh/anijain2305/846/base 2025-08-14T21:24:05.9114839Z * [new branch] gh/anijain2305/846/head -> origin/gh/anijain2305/846/head 2025-08-14T21:24:05.9115200Z * [new branch] gh/anijain2305/846/orig -> origin/gh/anijain2305/846/orig 2025-08-14T21:24:05.9115548Z * [new branch] gh/anijain2305/847/base -> origin/gh/anijain2305/847/base 2025-08-14T21:24:05.9115905Z * [new branch] gh/anijain2305/847/head -> origin/gh/anijain2305/847/head 2025-08-14T21:24:05.9116259Z * [new branch] gh/anijain2305/847/orig -> origin/gh/anijain2305/847/orig 2025-08-14T21:24:05.9116613Z * [new branch] gh/anijain2305/848/base -> origin/gh/anijain2305/848/base 2025-08-14T21:24:05.9117030Z * [new branch] gh/anijain2305/848/head -> origin/gh/anijain2305/848/head 2025-08-14T21:24:05.9117390Z * [new branch] gh/anijain2305/848/orig -> origin/gh/anijain2305/848/orig 2025-08-14T21:24:05.9118719Z * [new branch] gh/anjali411/216/base -> origin/gh/anjali411/216/base 2025-08-14T21:24:05.9119088Z * [new branch] gh/anjali411/216/head -> origin/gh/anjali411/216/head 2025-08-14T21:24:05.9119679Z * [new branch] gh/anjali411/216/orig -> origin/gh/anjali411/216/orig 2025-08-14T21:24:05.9121335Z * [new branch] gh/ankitageorge/10/base -> origin/gh/ankitageorge/10/base 2025-08-14T21:24:05.9121720Z * [new branch] gh/ankitageorge/10/head -> origin/gh/ankitageorge/10/head 2025-08-14T21:24:05.9122530Z * [new branch] gh/ankitageorge/10/orig -> origin/gh/ankitageorge/10/orig 2025-08-14T21:24:05.9123505Z * [new branch] gh/ankitageorge/12/base -> origin/gh/ankitageorge/12/base 2025-08-14T21:24:05.9124297Z * [new branch] gh/ankitageorge/12/head -> origin/gh/ankitageorge/12/head 2025-08-14T21:24:05.9124988Z * [new branch] gh/ankitageorge/12/orig -> origin/gh/ankitageorge/12/orig 2025-08-14T21:24:05.9126216Z * [new branch] gh/ankitageorge/13/base -> origin/gh/ankitageorge/13/base 2025-08-14T21:24:05.9126624Z * [new branch] gh/ankitageorge/13/head -> origin/gh/ankitageorge/13/head 2025-08-14T21:24:05.9127377Z * [new branch] gh/ankitageorge/13/orig -> origin/gh/ankitageorge/13/orig 2025-08-14T21:24:05.9128751Z * [new branch] gh/ankitageorge/14/base -> origin/gh/ankitageorge/14/base 2025-08-14T21:24:05.9129144Z * [new branch] gh/ankitageorge/14/head -> origin/gh/ankitageorge/14/head 2025-08-14T21:24:05.9130328Z * [new branch] gh/ankitageorge/14/orig -> origin/gh/ankitageorge/14/orig 2025-08-14T21:24:05.9131124Z * [new branch] gh/ankitageorge/15/base -> origin/gh/ankitageorge/15/base 2025-08-14T21:24:05.9131833Z * [new branch] gh/ankitageorge/15/head -> origin/gh/ankitageorge/15/head 2025-08-14T21:24:05.9132616Z * [new branch] gh/ankitageorge/15/orig -> origin/gh/ankitageorge/15/orig 2025-08-14T21:24:05.9133898Z * [new branch] gh/ankitageorge/16/base -> origin/gh/ankitageorge/16/base 2025-08-14T21:24:05.9134374Z * [new branch] gh/ankitageorge/16/head -> origin/gh/ankitageorge/16/head 2025-08-14T21:24:05.9135147Z * [new branch] gh/ankitageorge/16/orig -> origin/gh/ankitageorge/16/orig 2025-08-14T21:24:05.9139915Z * [new branch] gh/ankitageorge/17/base -> origin/gh/ankitageorge/17/base 2025-08-14T21:24:05.9140307Z * [new branch] gh/ankitageorge/17/head -> origin/gh/ankitageorge/17/head 2025-08-14T21:24:05.9140710Z * [new branch] gh/ankitageorge/17/orig -> origin/gh/ankitageorge/17/orig 2025-08-14T21:24:05.9141147Z * [new branch] gh/ankitageorge/18/base -> 
origin/gh/ankitageorge/18/base 2025-08-14T21:24:05.9141518Z * [new branch] gh/ankitageorge/18/head -> origin/gh/ankitageorge/18/head 2025-08-14T21:24:05.9142011Z * [new branch] gh/ankitageorge/18/orig -> origin/gh/ankitageorge/18/orig 2025-08-14T21:24:05.9142387Z * [new branch] gh/ankitageorge/19/base -> origin/gh/ankitageorge/19/base 2025-08-14T21:24:05.9142750Z * [new branch] gh/ankitageorge/19/head -> origin/gh/ankitageorge/19/head 2025-08-14T21:24:05.9143462Z * [new branch] gh/ankitageorge/19/orig -> origin/gh/ankitageorge/19/orig 2025-08-14T21:24:05.9146465Z * [new branch] gh/ankitageorge/20/base -> origin/gh/ankitageorge/20/base 2025-08-14T21:24:05.9146919Z * [new branch] gh/ankitageorge/20/head -> origin/gh/ankitageorge/20/head 2025-08-14T21:24:05.9147839Z * [new branch] gh/ankitageorge/20/orig -> origin/gh/ankitageorge/20/orig 2025-08-14T21:24:05.9148478Z * [new branch] gh/ankitageorge/21/base -> origin/gh/ankitageorge/21/base 2025-08-14T21:24:05.9149288Z * [new branch] gh/ankitageorge/21/head -> origin/gh/ankitageorge/21/head 2025-08-14T21:24:05.9149847Z * [new branch] gh/ankitageorge/21/orig -> origin/gh/ankitageorge/21/orig 2025-08-14T21:24:05.9152598Z * [new branch] gh/anshul-si/1/base -> origin/gh/anshul-si/1/base 2025-08-14T21:24:05.9153018Z * [new branch] gh/anshul-si/1/head -> origin/gh/anshul-si/1/head 2025-08-14T21:24:05.9153535Z * [new branch] gh/anshul-si/10/base -> origin/gh/anshul-si/10/base 2025-08-14T21:24:05.9153936Z * [new branch] gh/anshul-si/10/head -> origin/gh/anshul-si/10/head 2025-08-14T21:24:05.9154667Z * [new branch] gh/anshul-si/10/orig -> origin/gh/anshul-si/10/orig 2025-08-14T21:24:05.9158229Z * [new branch] gh/anshul-si/11/base -> origin/gh/anshul-si/11/base 2025-08-14T21:24:05.9158653Z * [new branch] gh/anshul-si/11/head -> origin/gh/anshul-si/11/head 2025-08-14T21:24:05.9159014Z * [new branch] gh/anshul-si/11/orig -> origin/gh/anshul-si/11/orig 2025-08-14T21:24:05.9159377Z * [new branch] gh/anshul-si/12/base -> origin/gh/anshul-si/12/base 2025-08-14T21:24:05.9159741Z * [new branch] gh/anshul-si/12/head -> origin/gh/anshul-si/12/head 2025-08-14T21:24:05.9160097Z * [new branch] gh/anshul-si/12/orig -> origin/gh/anshul-si/12/orig 2025-08-14T21:24:05.9161045Z * [new branch] gh/anshul-si/13/base -> origin/gh/anshul-si/13/base 2025-08-14T21:24:05.9161587Z * [new branch] gh/anshul-si/13/head -> origin/gh/anshul-si/13/head 2025-08-14T21:24:05.9162331Z * [new branch] gh/anshul-si/13/orig -> origin/gh/anshul-si/13/orig 2025-08-14T21:24:05.9163370Z * [new branch] gh/anshul-si/14/base -> origin/gh/anshul-si/14/base 2025-08-14T21:24:05.9163926Z * [new branch] gh/anshul-si/14/head -> origin/gh/anshul-si/14/head 2025-08-14T21:24:05.9164640Z * [new branch] gh/anshul-si/14/orig -> origin/gh/anshul-si/14/orig 2025-08-14T21:24:05.9165807Z * [new branch] gh/anshul-si/15/base -> origin/gh/anshul-si/15/base 2025-08-14T21:24:05.9166180Z * [new branch] gh/anshul-si/15/head -> origin/gh/anshul-si/15/head 2025-08-14T21:24:05.9166958Z * [new branch] gh/anshul-si/15/orig -> origin/gh/anshul-si/15/orig 2025-08-14T21:24:05.9171093Z * [new branch] gh/anshul-si/16/base -> origin/gh/anshul-si/16/base 2025-08-14T21:24:05.9171518Z * [new branch] gh/anshul-si/16/head -> origin/gh/anshul-si/16/head 2025-08-14T21:24:05.9172095Z * [new branch] gh/anshul-si/16/orig -> origin/gh/anshul-si/16/orig 2025-08-14T21:24:05.9172447Z * [new branch] gh/anshul-si/17/base -> origin/gh/anshul-si/17/base 2025-08-14T21:24:05.9172799Z * [new branch] gh/anshul-si/17/head -> origin/gh/anshul-si/17/head 
2025-08-14T21:24:05.9173149Z * [new branch] gh/anshul-si/17/orig -> origin/gh/anshul-si/17/orig 2025-08-14T21:24:05.9173491Z * [new branch] gh/anshul-si/18/base -> origin/gh/anshul-si/18/base 2025-08-14T21:24:05.9174145Z * [new branch] gh/anshul-si/18/head -> origin/gh/anshul-si/18/head 2025-08-14T21:24:05.9174700Z * [new branch] gh/anshul-si/18/orig -> origin/gh/anshul-si/18/orig 2025-08-14T21:24:05.9175544Z * [new branch] gh/anshul-si/19/base -> origin/gh/anshul-si/19/base 2025-08-14T21:24:05.9176413Z * [new branch] gh/anshul-si/19/head -> origin/gh/anshul-si/19/head 2025-08-14T21:24:05.9177038Z * [new branch] gh/anshul-si/19/orig -> origin/gh/anshul-si/19/orig 2025-08-14T21:24:05.9181356Z * [new branch] gh/anshul-si/2/base -> origin/gh/anshul-si/2/base 2025-08-14T21:24:05.9181760Z * [new branch] gh/anshul-si/2/head -> origin/gh/anshul-si/2/head 2025-08-14T21:24:05.9187514Z * [new branch] gh/anshul-si/20/base -> origin/gh/anshul-si/20/base 2025-08-14T21:24:05.9192134Z * [new branch] gh/anshul-si/20/head -> origin/gh/anshul-si/20/head 2025-08-14T21:24:05.9196329Z * [new branch] gh/anshul-si/20/orig -> origin/gh/anshul-si/20/orig 2025-08-14T21:24:05.9198174Z * [new branch] gh/anshul-si/21/base -> origin/gh/anshul-si/21/base 2025-08-14T21:24:05.9198950Z * [new branch] gh/anshul-si/21/head -> origin/gh/anshul-si/21/head 2025-08-14T21:24:05.9201987Z * [new branch] gh/anshul-si/21/orig -> origin/gh/anshul-si/21/orig 2025-08-14T21:24:05.9202462Z * [new branch] gh/anshul-si/22/base -> origin/gh/anshul-si/22/base 2025-08-14T21:24:05.9207619Z * [new branch] gh/anshul-si/22/head -> origin/gh/anshul-si/22/head 2025-08-14T21:24:05.9211946Z * [new branch] gh/anshul-si/22/orig -> origin/gh/anshul-si/22/orig 2025-08-14T21:24:05.9212382Z * [new branch] gh/anshul-si/23/base -> origin/gh/anshul-si/23/base 2025-08-14T21:24:05.9212742Z * [new branch] gh/anshul-si/23/head -> origin/gh/anshul-si/23/head 2025-08-14T21:24:05.9213086Z * [new branch] gh/anshul-si/23/orig -> origin/gh/anshul-si/23/orig 2025-08-14T21:24:05.9213431Z * [new branch] gh/anshul-si/24/base -> origin/gh/anshul-si/24/base 2025-08-14T21:24:05.9213771Z * [new branch] gh/anshul-si/24/head -> origin/gh/anshul-si/24/head 2025-08-14T21:24:05.9214131Z * [new branch] gh/anshul-si/24/orig -> origin/gh/anshul-si/24/orig 2025-08-14T21:24:05.9214496Z * [new branch] gh/anshul-si/25/base -> origin/gh/anshul-si/25/base 2025-08-14T21:24:05.9214836Z * [new branch] gh/anshul-si/25/head -> origin/gh/anshul-si/25/head 2025-08-14T21:24:05.9215173Z * [new branch] gh/anshul-si/25/orig -> origin/gh/anshul-si/25/orig 2025-08-14T21:24:05.9215507Z * [new branch] gh/anshul-si/26/base -> origin/gh/anshul-si/26/base 2025-08-14T21:24:05.9215847Z * [new branch] gh/anshul-si/26/head -> origin/gh/anshul-si/26/head 2025-08-14T21:24:05.9216190Z * [new branch] gh/anshul-si/26/orig -> origin/gh/anshul-si/26/orig 2025-08-14T21:24:05.9216531Z * [new branch] gh/anshul-si/27/base -> origin/gh/anshul-si/27/base 2025-08-14T21:24:05.9216863Z * [new branch] gh/anshul-si/27/head -> origin/gh/anshul-si/27/head 2025-08-14T21:24:05.9217210Z * [new branch] gh/anshul-si/27/orig -> origin/gh/anshul-si/27/orig 2025-08-14T21:24:05.9217741Z * [new branch] gh/anshul-si/3/base -> origin/gh/anshul-si/3/base 2025-08-14T21:24:05.9218098Z * [new branch] gh/anshul-si/3/head -> origin/gh/anshul-si/3/head 2025-08-14T21:24:05.9218444Z * [new branch] gh/anshul-si/4/base -> origin/gh/anshul-si/4/base 2025-08-14T21:24:05.9218791Z * [new branch] gh/anshul-si/4/head -> origin/gh/anshul-si/4/head 
2025-08-14T21:24:05.9219128Z * [new branch] gh/anshul-si/5/base -> origin/gh/anshul-si/5/base 2025-08-14T21:24:05.9219456Z * [new branch] gh/anshul-si/5/head -> origin/gh/anshul-si/5/head 2025-08-14T21:24:05.9220017Z * [new branch] gh/anshul-si/6/base -> origin/gh/anshul-si/6/base 2025-08-14T21:24:05.9220366Z * [new branch] gh/anshul-si/6/head -> origin/gh/anshul-si/6/head 2025-08-14T21:24:05.9220769Z * [new branch] gh/anshul-si/6/orig -> origin/gh/anshul-si/6/orig 2025-08-14T21:24:05.9221108Z * [new branch] gh/anshul-si/7/base -> origin/gh/anshul-si/7/base 2025-08-14T21:24:05.9221455Z * [new branch] gh/anshul-si/7/head -> origin/gh/anshul-si/7/head 2025-08-14T21:24:05.9221800Z * [new branch] gh/anshul-si/7/orig -> origin/gh/anshul-si/7/orig 2025-08-14T21:24:05.9222140Z * [new branch] gh/anshul-si/8/base -> origin/gh/anshul-si/8/base 2025-08-14T21:24:05.9222478Z * [new branch] gh/anshul-si/8/head -> origin/gh/anshul-si/8/head 2025-08-14T21:24:05.9222829Z * [new branch] gh/anshul-si/8/orig -> origin/gh/anshul-si/8/orig 2025-08-14T21:24:05.9223173Z * [new branch] gh/anshul-si/9/base -> origin/gh/anshul-si/9/base 2025-08-14T21:24:05.9223514Z * [new branch] gh/anshul-si/9/head -> origin/gh/anshul-si/9/head 2025-08-14T21:24:05.9223856Z * [new branch] gh/anshul-si/9/orig -> origin/gh/anshul-si/9/orig 2025-08-14T21:24:05.9224209Z * [new branch] gh/aorenste/132/base -> origin/gh/aorenste/132/base 2025-08-14T21:24:05.9224563Z * [new branch] gh/aorenste/132/head -> origin/gh/aorenste/132/head 2025-08-14T21:24:05.9224906Z * [new branch] gh/aorenste/235/base -> origin/gh/aorenste/235/base 2025-08-14T21:24:05.9225255Z * [new branch] gh/aorenste/235/head -> origin/gh/aorenste/235/head 2025-08-14T21:24:05.9225603Z * [new branch] gh/aorenste/235/orig -> origin/gh/aorenste/235/orig 2025-08-14T21:24:05.9225951Z * [new branch] gh/aorenste/236/base -> origin/gh/aorenste/236/base 2025-08-14T21:24:05.9226291Z * [new branch] gh/aorenste/236/head -> origin/gh/aorenste/236/head 2025-08-14T21:24:05.9226641Z * [new branch] gh/aorenste/236/orig -> origin/gh/aorenste/236/orig 2025-08-14T21:24:05.9227000Z * [new branch] gh/aorenste/237/base -> origin/gh/aorenste/237/base 2025-08-14T21:24:05.9231540Z * [new branch] gh/aorenste/237/head -> origin/gh/aorenste/237/head 2025-08-14T21:24:05.9236261Z * [new branch] gh/aorenste/237/orig -> origin/gh/aorenste/237/orig 2025-08-14T21:24:05.9239003Z * [new branch] gh/aorenste/238/base -> origin/gh/aorenste/238/base 2025-08-14T21:24:05.9239520Z * [new branch] gh/aorenste/238/head -> origin/gh/aorenste/238/head 2025-08-14T21:24:05.9245252Z * [new branch] gh/aorenste/238/orig -> origin/gh/aorenste/238/orig 2025-08-14T21:24:05.9248964Z * [new branch] gh/bdhirsh/650/base -> origin/gh/bdhirsh/650/base 2025-08-14T21:24:05.9249539Z * [new branch] gh/bdhirsh/650/head -> origin/gh/bdhirsh/650/head 2025-08-14T21:24:05.9249985Z * [new branch] gh/bdhirsh/650/orig -> origin/gh/bdhirsh/650/orig 2025-08-14T21:24:05.9255237Z * [new branch] gh/bdhirsh/656/base -> origin/gh/bdhirsh/656/base 2025-08-14T21:24:05.9255835Z * [new branch] gh/bdhirsh/656/head -> origin/gh/bdhirsh/656/head 2025-08-14T21:24:05.9256325Z * [new branch] gh/bdhirsh/657/base -> origin/gh/bdhirsh/657/base 2025-08-14T21:24:05.9256680Z * [new branch] gh/bdhirsh/657/head -> origin/gh/bdhirsh/657/head 2025-08-14T21:24:05.9257036Z * [new branch] gh/bdhirsh/659/base -> origin/gh/bdhirsh/659/base 2025-08-14T21:24:05.9257379Z * [new branch] gh/bdhirsh/659/head -> origin/gh/bdhirsh/659/head 2025-08-14T21:24:05.9257722Z * [new branch] 
gh/bdhirsh/659/orig -> origin/gh/bdhirsh/659/orig 2025-08-14T21:24:05.9258064Z * [new branch] gh/bdhirsh/663/base -> origin/gh/bdhirsh/663/base 2025-08-14T21:24:05.9258485Z * [new branch] gh/bdhirsh/663/head -> origin/gh/bdhirsh/663/head 2025-08-14T21:24:05.9258872Z * [new branch] gh/bdhirsh/663/orig -> origin/gh/bdhirsh/663/orig 2025-08-14T21:24:05.9259226Z * [new branch] gh/bdhirsh/665/base -> origin/gh/bdhirsh/665/base 2025-08-14T21:24:05.9259580Z * [new branch] gh/bdhirsh/665/head -> origin/gh/bdhirsh/665/head 2025-08-14T21:24:05.9260088Z * [new branch] gh/bdhirsh/665/orig -> origin/gh/bdhirsh/665/orig 2025-08-14T21:24:05.9260443Z * [new branch] gh/bdhirsh/666/base -> origin/gh/bdhirsh/666/base 2025-08-14T21:24:05.9260793Z * [new branch] gh/bdhirsh/666/head -> origin/gh/bdhirsh/666/head 2025-08-14T21:24:05.9261145Z * [new branch] gh/bdhirsh/666/orig -> origin/gh/bdhirsh/666/orig 2025-08-14T21:24:05.9261526Z * [new branch] gh/benjaminglass1/79/base -> origin/gh/benjaminglass1/79/base 2025-08-14T21:24:05.9261943Z * [new branch] gh/benjaminglass1/79/head -> origin/gh/benjaminglass1/79/head 2025-08-14T21:24:05.9262335Z * [new branch] gh/benjaminglass1/79/orig -> origin/gh/benjaminglass1/79/orig 2025-08-14T21:24:05.9262710Z * [new branch] gh/benjaminglass1/86/base -> origin/gh/benjaminglass1/86/base 2025-08-14T21:24:05.9263096Z * [new branch] gh/benjaminglass1/86/head -> origin/gh/benjaminglass1/86/head 2025-08-14T21:24:05.9263480Z * [new branch] gh/benjaminglass1/86/orig -> origin/gh/benjaminglass1/86/orig 2025-08-14T21:24:05.9263866Z * [new branch] gh/benjaminglass1/89/base -> origin/gh/benjaminglass1/89/base 2025-08-14T21:24:05.9264239Z * [new branch] gh/benjaminglass1/89/head -> origin/gh/benjaminglass1/89/head 2025-08-14T21:24:05.9264619Z * [new branch] gh/benjaminglass1/89/orig -> origin/gh/benjaminglass1/89/orig 2025-08-14T21:24:05.9265000Z * [new branch] gh/benjaminglass1/91/base -> origin/gh/benjaminglass1/91/base 2025-08-14T21:24:05.9265394Z * [new branch] gh/benjaminglass1/91/head -> origin/gh/benjaminglass1/91/head 2025-08-14T21:24:05.9265768Z * [new branch] gh/benjaminglass1/91/orig -> origin/gh/benjaminglass1/91/orig 2025-08-14T21:24:05.9266147Z * [new branch] gh/benjaminglass1/93/base -> origin/gh/benjaminglass1/93/base 2025-08-14T21:24:05.9266547Z * [new branch] gh/benjaminglass1/93/head -> origin/gh/benjaminglass1/93/head 2025-08-14T21:24:05.9266937Z * [new branch] gh/benjaminglass1/93/orig -> origin/gh/benjaminglass1/93/orig 2025-08-14T21:24:05.9267312Z * [new branch] gh/benjaminglass1/94/base -> origin/gh/benjaminglass1/94/base 2025-08-14T21:24:05.9267693Z * [new branch] gh/benjaminglass1/94/head -> origin/gh/benjaminglass1/94/head 2025-08-14T21:24:05.9268080Z * [new branch] gh/benjaminglass1/94/orig -> origin/gh/benjaminglass1/94/orig 2025-08-14T21:24:05.9268517Z * [new branch] gh/benjaminglass1/95/base -> origin/gh/benjaminglass1/95/base 2025-08-14T21:24:05.9268892Z * [new branch] gh/benjaminglass1/95/head -> origin/gh/benjaminglass1/95/head 2025-08-14T21:24:05.9269280Z * [new branch] gh/benjaminglass1/95/orig -> origin/gh/benjaminglass1/95/orig 2025-08-14T21:24:05.9269671Z * [new branch] gh/benjaminglass1/96/base -> origin/gh/benjaminglass1/96/base 2025-08-14T21:24:05.9270023Z * [new branch] gh/benjaminglass1/96/head -> origin/gh/benjaminglass1/96/head 2025-08-14T21:24:05.9270404Z * [new branch] gh/benjaminglass1/96/orig -> origin/gh/benjaminglass1/96/orig 2025-08-14T21:24:05.9270792Z * [new branch] gh/benjaminglass1/97/base -> origin/gh/benjaminglass1/97/base 
2025-08-14T21:24:05.9271177Z * [new branch] gh/benjaminglass1/97/head -> origin/gh/benjaminglass1/97/head 2025-08-14T21:24:05.9271567Z * [new branch] gh/benjaminglass1/97/orig -> origin/gh/benjaminglass1/97/orig 2025-08-14T21:24:05.9271923Z * [new branch] gh/benjaminglass1/98/base -> origin/gh/benjaminglass1/98/base 2025-08-14T21:24:05.9272300Z * [new branch] gh/benjaminglass1/98/head -> origin/gh/benjaminglass1/98/head 2025-08-14T21:24:05.9272691Z * [new branch] gh/benjaminglass1/98/orig -> origin/gh/benjaminglass1/98/orig 2025-08-14T21:24:05.9273035Z * [new branch] gh/bobrenjc93/478/base -> origin/gh/bobrenjc93/478/base 2025-08-14T21:24:05.9273378Z * [new branch] gh/bobrenjc93/478/head -> origin/gh/bobrenjc93/478/head 2025-08-14T21:24:05.9273712Z * [new branch] gh/bobrenjc93/478/orig -> origin/gh/bobrenjc93/478/orig 2025-08-14T21:24:05.9274745Z * [new branch] gh/bobrenjc93/514/base -> origin/gh/bobrenjc93/514/base 2025-08-14T21:24:05.9275081Z * [new branch] gh/bobrenjc93/514/head -> origin/gh/bobrenjc93/514/head 2025-08-14T21:24:05.9275434Z * [new branch] gh/bobrenjc93/514/orig -> origin/gh/bobrenjc93/514/orig 2025-08-14T21:24:05.9275783Z * [new branch] gh/bobrenjc93/521/base -> origin/gh/bobrenjc93/521/base 2025-08-14T21:24:05.9281015Z * [new branch] gh/bobrenjc93/521/head -> origin/gh/bobrenjc93/521/head 2025-08-14T21:24:05.9283104Z * [new branch] gh/bobrenjc93/521/orig -> origin/gh/bobrenjc93/521/orig 2025-08-14T21:24:05.9283598Z * [new branch] gh/bobrenjc93/522/base -> origin/gh/bobrenjc93/522/base 2025-08-14T21:24:05.9288506Z * [new branch] gh/bobrenjc93/522/head -> origin/gh/bobrenjc93/522/head 2025-08-14T21:24:05.9290967Z * [new branch] gh/bobrenjc93/522/orig -> origin/gh/bobrenjc93/522/orig 2025-08-14T21:24:05.9291489Z * [new branch] gh/bobrenjc93/525/base -> origin/gh/bobrenjc93/525/base 2025-08-14T21:24:05.9296789Z * [new branch] gh/bobrenjc93/525/head -> origin/gh/bobrenjc93/525/head 2025-08-14T21:24:05.9297350Z * [new branch] gh/bobrenjc93/525/orig -> origin/gh/bobrenjc93/525/orig 2025-08-14T21:24:05.9297849Z * [new branch] gh/bobrenjc93/526/base -> origin/gh/bobrenjc93/526/base 2025-08-14T21:24:05.9298766Z * [new branch] gh/bobrenjc93/526/head -> origin/gh/bobrenjc93/526/head 2025-08-14T21:24:05.9299182Z * [new branch] gh/bobrenjc93/526/orig -> origin/gh/bobrenjc93/526/orig 2025-08-14T21:24:05.9299551Z * [new branch] gh/bobrenjc93/527/base -> origin/gh/bobrenjc93/527/base 2025-08-14T21:24:05.9300121Z * [new branch] gh/bobrenjc93/527/head -> origin/gh/bobrenjc93/527/head 2025-08-14T21:24:05.9300461Z * [new branch] gh/bobrenjc93/527/orig -> origin/gh/bobrenjc93/527/orig 2025-08-14T21:24:05.9300824Z * [new branch] gh/bobrenjc93/528/base -> origin/gh/bobrenjc93/528/base 2025-08-14T21:24:05.9301572Z * [new branch] gh/bobrenjc93/528/head -> origin/gh/bobrenjc93/528/head 2025-08-14T21:24:05.9301950Z * [new branch] gh/bobrenjc93/528/orig -> origin/gh/bobrenjc93/528/orig 2025-08-14T21:24:05.9302311Z * [new branch] gh/bobrenjc93/529/base -> origin/gh/bobrenjc93/529/base 2025-08-14T21:24:05.9302681Z * [new branch] gh/bobrenjc93/529/head -> origin/gh/bobrenjc93/529/head 2025-08-14T21:24:05.9303056Z * [new branch] gh/bobrenjc93/529/orig -> origin/gh/bobrenjc93/529/orig 2025-08-14T21:24:05.9303426Z * [new branch] gh/bobrenjc93/534/base -> origin/gh/bobrenjc93/534/base 2025-08-14T21:24:05.9303788Z * [new branch] gh/bobrenjc93/534/head -> origin/gh/bobrenjc93/534/head 2025-08-14T21:24:05.9304237Z * [new branch] gh/bobrenjc93/534/orig -> origin/gh/bobrenjc93/534/orig 
2025-08-14T21:24:05.9304663Z * [new branch] gh/bobrenjc93/535/base -> origin/gh/bobrenjc93/535/base 2025-08-14T21:24:05.9305045Z * [new branch] gh/bobrenjc93/535/head -> origin/gh/bobrenjc93/535/head 2025-08-14T21:24:05.9305413Z * [new branch] gh/bobrenjc93/535/orig -> origin/gh/bobrenjc93/535/orig 2025-08-14T21:24:05.9305793Z * [new branch] gh/bobrenjc93/536/base -> origin/gh/bobrenjc93/536/base 2025-08-14T21:24:05.9306170Z * [new branch] gh/bobrenjc93/536/head -> origin/gh/bobrenjc93/536/head 2025-08-14T21:24:05.9306545Z * [new branch] gh/bobrenjc93/536/orig -> origin/gh/bobrenjc93/536/orig 2025-08-14T21:24:05.9306919Z * [new branch] gh/bobrenjc93/537/base -> origin/gh/bobrenjc93/537/base 2025-08-14T21:24:05.9307294Z * [new branch] gh/bobrenjc93/537/head -> origin/gh/bobrenjc93/537/head 2025-08-14T21:24:05.9307667Z * [new branch] gh/bobrenjc93/537/orig -> origin/gh/bobrenjc93/537/orig 2025-08-14T21:24:05.9308039Z * [new branch] gh/bobrenjc93/538/base -> origin/gh/bobrenjc93/538/base 2025-08-14T21:24:05.9308422Z * [new branch] gh/bobrenjc93/538/head -> origin/gh/bobrenjc93/538/head 2025-08-14T21:24:05.9308801Z * [new branch] gh/bobrenjc93/538/orig -> origin/gh/bobrenjc93/538/orig 2025-08-14T21:24:05.9309178Z * [new branch] gh/bobrenjc93/539/base -> origin/gh/bobrenjc93/539/base 2025-08-14T21:24:05.9309544Z * [new branch] gh/bobrenjc93/539/head -> origin/gh/bobrenjc93/539/head 2025-08-14T21:24:05.9309950Z * [new branch] gh/bobrenjc93/539/orig -> origin/gh/bobrenjc93/539/orig 2025-08-14T21:24:05.9310320Z * [new branch] gh/bobrenjc93/540/base -> origin/gh/bobrenjc93/540/base 2025-08-14T21:24:05.9310696Z * [new branch] gh/bobrenjc93/540/head -> origin/gh/bobrenjc93/540/head 2025-08-14T21:24:05.9311061Z * [new branch] gh/bobrenjc93/540/orig -> origin/gh/bobrenjc93/540/orig 2025-08-14T21:24:05.9311446Z * [new branch] gh/bobrenjc93/541/base -> origin/gh/bobrenjc93/541/base 2025-08-14T21:24:05.9311824Z * [new branch] gh/bobrenjc93/541/head -> origin/gh/bobrenjc93/541/head 2025-08-14T21:24:05.9312341Z * [new branch] gh/bobrenjc93/541/orig -> origin/gh/bobrenjc93/541/orig 2025-08-14T21:24:05.9315800Z * [new branch] gh/bobrenjc93/542/base -> origin/gh/bobrenjc93/542/base 2025-08-14T21:24:05.9316335Z * [new branch] gh/bobrenjc93/542/head -> origin/gh/bobrenjc93/542/head 2025-08-14T21:24:05.9319436Z * [new branch] gh/bobrenjc93/542/orig -> origin/gh/bobrenjc93/542/orig 2025-08-14T21:24:05.9319929Z * [new branch] gh/bobrenjc93/543/base -> origin/gh/bobrenjc93/543/base 2025-08-14T21:24:05.9325744Z * [new branch] gh/bobrenjc93/543/head -> origin/gh/bobrenjc93/543/head 2025-08-14T21:24:05.9330074Z * [new branch] gh/bobrenjc93/543/orig -> origin/gh/bobrenjc93/543/orig 2025-08-14T21:24:05.9330679Z * [new branch] gh/bobrenjc93/544/base -> origin/gh/bobrenjc93/544/base 2025-08-14T21:24:05.9331059Z * [new branch] gh/bobrenjc93/544/head -> origin/gh/bobrenjc93/544/head 2025-08-14T21:24:05.9331428Z * [new branch] gh/bobrenjc93/544/orig -> origin/gh/bobrenjc93/544/orig 2025-08-14T21:24:05.9331590Z * [new branch] gh/bobrenjc93/545/base -> origin/gh/bobrenjc93/545/base 2025-08-14T21:24:05.9331734Z * [new branch] gh/bobrenjc93/545/head -> origin/gh/bobrenjc93/545/head 2025-08-14T21:24:05.9331890Z * [new branch] gh/bobrenjc93/545/orig -> origin/gh/bobrenjc93/545/orig 2025-08-14T21:24:05.9332035Z * [new branch] gh/bobrenjc93/546/base -> origin/gh/bobrenjc93/546/base 2025-08-14T21:24:05.9332182Z * [new branch] gh/bobrenjc93/546/head -> origin/gh/bobrenjc93/546/head 2025-08-14T21:24:05.9332392Z * [new branch] 
gh/bobrenjc93/546/orig -> origin/gh/bobrenjc93/546/orig 2025-08-14T21:24:05.9332539Z * [new branch] gh/bobrenjc93/547/base -> origin/gh/bobrenjc93/547/base 2025-08-14T21:24:05.9332693Z * [new branch] gh/bobrenjc93/547/head -> origin/gh/bobrenjc93/547/head 2025-08-14T21:24:05.9332838Z * [new branch] gh/bobrenjc93/547/orig -> origin/gh/bobrenjc93/547/orig 2025-08-14T21:24:05.9332980Z * [new branch] gh/bobrenjc93/548/base -> origin/gh/bobrenjc93/548/base 2025-08-14T21:24:05.9333132Z * [new branch] gh/bobrenjc93/548/head -> origin/gh/bobrenjc93/548/head 2025-08-14T21:24:05.9333276Z * [new branch] gh/bobrenjc93/548/orig -> origin/gh/bobrenjc93/548/orig 2025-08-14T21:24:05.9333428Z * [new branch] gh/bobrenjc93/549/base -> origin/gh/bobrenjc93/549/base 2025-08-14T21:24:05.9333575Z * [new branch] gh/bobrenjc93/549/head -> origin/gh/bobrenjc93/549/head 2025-08-14T21:24:05.9333723Z * [new branch] gh/bobrenjc93/549/orig -> origin/gh/bobrenjc93/549/orig 2025-08-14T21:24:05.9333904Z * [new branch] gh/briancoutinho/2/base -> origin/gh/briancoutinho/2/base 2025-08-14T21:24:05.9334063Z * [new branch] gh/briancoutinho/2/head -> origin/gh/briancoutinho/2/head 2025-08-14T21:24:05.9334217Z * [new branch] gh/c00w/23/base -> origin/gh/c00w/23/base 2025-08-14T21:24:05.9334351Z * [new branch] gh/c00w/23/head -> origin/gh/c00w/23/head 2025-08-14T21:24:05.9336221Z * [new branch] gh/c00w/38/base -> origin/gh/c00w/38/base 2025-08-14T21:24:05.9336838Z * [new branch] gh/c00w/38/head -> origin/gh/c00w/38/head 2025-08-14T21:24:05.9337009Z * [new branch] gh/c00w/38/orig -> origin/gh/c00w/38/orig 2025-08-14T21:24:05.9337166Z * [new branch] gh/c00w/48/base -> origin/gh/c00w/48/base 2025-08-14T21:24:05.9337330Z * [new branch] gh/c00w/48/head -> origin/gh/c00w/48/head 2025-08-14T21:24:05.9337465Z * [new branch] gh/c00w/48/orig -> origin/gh/c00w/48/orig 2025-08-14T21:24:05.9342623Z * [new branch] gh/c00w/50/base -> origin/gh/c00w/50/base 2025-08-14T21:24:05.9347352Z * [new branch] gh/c00w/50/head -> origin/gh/c00w/50/head 2025-08-14T21:24:05.9351751Z * [new branch] gh/c00w/50/orig -> origin/gh/c00w/50/orig 2025-08-14T21:24:05.9356012Z * [new branch] gh/c00w/51/base -> origin/gh/c00w/51/base 2025-08-14T21:24:05.9361125Z * [new branch] gh/c00w/51/head -> origin/gh/c00w/51/head 2025-08-14T21:24:05.9363469Z * [new branch] gh/c00w/51/orig -> origin/gh/c00w/51/orig 2025-08-14T21:24:05.9363640Z * [new branch] gh/c00w/52/base -> origin/gh/c00w/52/base 2025-08-14T21:24:05.9364017Z * [new branch] gh/c00w/52/head -> origin/gh/c00w/52/head 2025-08-14T21:24:05.9364151Z * [new branch] gh/c00w/52/orig -> origin/gh/c00w/52/orig 2025-08-14T21:24:05.9364279Z * [new branch] gh/c00w/53/base -> origin/gh/c00w/53/base 2025-08-14T21:24:05.9364414Z * [new branch] gh/c00w/53/head -> origin/gh/c00w/53/head 2025-08-14T21:24:05.9364540Z * [new branch] gh/c00w/53/orig -> origin/gh/c00w/53/orig 2025-08-14T21:24:05.9364671Z * [new branch] gh/c00w/54/base -> origin/gh/c00w/54/base 2025-08-14T21:24:05.9364796Z * [new branch] gh/c00w/54/head -> origin/gh/c00w/54/head 2025-08-14T21:24:05.9364921Z * [new branch] gh/c00w/54/orig -> origin/gh/c00w/54/orig 2025-08-14T21:24:05.9365094Z * [new branch] gh/chenmillie/1/base -> origin/gh/chenmillie/1/base 2025-08-14T21:24:05.9365335Z * [new branch] gh/chenmillie/1/head -> origin/gh/chenmillie/1/head 2025-08-14T21:24:05.9365493Z * [new branch] gh/chenmillie/1/orig -> origin/gh/chenmillie/1/orig 2025-08-14T21:24:05.9365650Z * [new branch] gh/clee2000/1/base -> origin/gh/clee2000/1/base 2025-08-14T21:24:05.9365789Z * [new 
branch] gh/clee2000/1/head -> origin/gh/clee2000/1/head 2025-08-14T21:24:05.9365929Z * [new branch] gh/clee2000/1/orig -> origin/gh/clee2000/1/orig 2025-08-14T21:24:05.9366090Z * [new branch] gh/coconutruben/1/base -> origin/gh/coconutruben/1/base 2025-08-14T21:24:05.9366250Z * [new branch] gh/coconutruben/1/head -> origin/gh/coconutruben/1/head 2025-08-14T21:24:05.9366412Z * [new branch] gh/coconutruben/11/base -> origin/gh/coconutruben/11/base 2025-08-14T21:24:05.9366568Z * [new branch] gh/coconutruben/11/head -> origin/gh/coconutruben/11/head 2025-08-14T21:24:05.9366730Z * [new branch] gh/coconutruben/11/orig -> origin/gh/coconutruben/11/orig 2025-08-14T21:24:05.9366884Z * [new branch] gh/coconutruben/12/base -> origin/gh/coconutruben/12/base 2025-08-14T21:24:05.9367032Z * [new branch] gh/coconutruben/12/head -> origin/gh/coconutruben/12/head 2025-08-14T21:24:05.9367188Z * [new branch] gh/coconutruben/12/orig -> origin/gh/coconutruben/12/orig 2025-08-14T21:24:05.9367336Z * [new branch] gh/coconutruben/13/base -> origin/gh/coconutruben/13/base 2025-08-14T21:24:05.9367493Z * [new branch] gh/coconutruben/13/head -> origin/gh/coconutruben/13/head 2025-08-14T21:24:05.9367641Z * [new branch] gh/coconutruben/13/orig -> origin/gh/coconutruben/13/orig 2025-08-14T21:24:05.9372539Z * [new branch] gh/coconutruben/14/base -> origin/gh/coconutruben/14/base 2025-08-14T21:24:05.9374469Z * [new branch] gh/coconutruben/14/head -> origin/gh/coconutruben/14/head 2025-08-14T21:24:05.9374816Z * [new branch] gh/coconutruben/14/orig -> origin/gh/coconutruben/14/orig 2025-08-14T21:24:05.9375007Z * [new branch] gh/coconutruben/15/base -> origin/gh/coconutruben/15/base 2025-08-14T21:24:05.9375261Z * [new branch] gh/coconutruben/15/head -> origin/gh/coconutruben/15/head 2025-08-14T21:24:05.9375445Z * [new branch] gh/coconutruben/15/orig -> origin/gh/coconutruben/15/orig 2025-08-14T21:24:05.9375746Z * [new branch] gh/coconutruben/16/base -> origin/gh/coconutruben/16/base 2025-08-14T21:24:05.9375924Z * [new branch] gh/coconutruben/16/head -> origin/gh/coconutruben/16/head 2025-08-14T21:24:05.9376099Z * [new branch] gh/coconutruben/16/orig -> origin/gh/coconutruben/16/orig 2025-08-14T21:24:05.9376264Z * [new branch] gh/coconutruben/17/base -> origin/gh/coconutruben/17/base 2025-08-14T21:24:05.9376713Z * [new branch] gh/coconutruben/17/head -> origin/gh/coconutruben/17/head 2025-08-14T21:24:05.9376896Z * [new branch] gh/coconutruben/17/orig -> origin/gh/coconutruben/17/orig 2025-08-14T21:24:05.9377052Z * [new branch] gh/coconutruben/18/base -> origin/gh/coconutruben/18/base 2025-08-14T21:24:05.9377218Z * [new branch] gh/coconutruben/18/head -> origin/gh/coconutruben/18/head 2025-08-14T21:24:05.9377376Z * [new branch] gh/coconutruben/18/orig -> origin/gh/coconutruben/18/orig 2025-08-14T21:24:05.9377541Z * [new branch] gh/coconutruben/19/base -> origin/gh/coconutruben/19/base 2025-08-14T21:24:05.9378244Z * [new branch] gh/coconutruben/19/head -> origin/gh/coconutruben/19/head 2025-08-14T21:24:05.9378449Z * [new branch] gh/coconutruben/19/orig -> origin/gh/coconutruben/19/orig 2025-08-14T21:24:05.9385314Z * [new branch] gh/coconutruben/20/base -> origin/gh/coconutruben/20/base 2025-08-14T21:24:05.9388114Z * [new branch] gh/coconutruben/20/head -> origin/gh/coconutruben/20/head 2025-08-14T21:24:05.9389543Z * [new branch] gh/coconutruben/20/orig -> origin/gh/coconutruben/20/orig 2025-08-14T21:24:05.9389833Z * [new branch] gh/coconutruben/21/base -> origin/gh/coconutruben/21/base 2025-08-14T21:24:05.9390004Z * [new branch] 
gh/coconutruben/21/head -> origin/gh/coconutruben/21/head 2025-08-14T21:24:05.9390246Z * [new branch] gh/coconutruben/21/orig -> origin/gh/coconutruben/21/orig 2025-08-14T21:24:05.9390556Z * [new branch] gh/coconutruben/22/base -> origin/gh/coconutruben/22/base 2025-08-14T21:24:05.9390830Z * [new branch] gh/coconutruben/22/head -> origin/gh/coconutruben/22/head 2025-08-14T21:24:05.9390982Z * [new branch] gh/coconutruben/22/orig -> origin/gh/coconutruben/22/orig 2025-08-14T21:24:05.9391233Z * [new branch] gh/coconutruben/23/base -> origin/gh/coconutruben/23/base 2025-08-14T21:24:05.9396443Z * [new branch] gh/coconutruben/23/head -> origin/gh/coconutruben/23/head 2025-08-14T21:24:05.9401952Z * [new branch] gh/coconutruben/23/orig -> origin/gh/coconutruben/23/orig 2025-08-14T21:24:05.9406243Z * [new branch] gh/coconutruben/24/base -> origin/gh/coconutruben/24/base 2025-08-14T21:24:05.9411261Z * [new branch] gh/coconutruben/24/head -> origin/gh/coconutruben/24/head 2025-08-14T21:24:05.9416231Z * [new branch] gh/coconutruben/24/orig -> origin/gh/coconutruben/24/orig 2025-08-14T21:24:05.9416461Z * [new branch] gh/coconutruben/25/base -> origin/gh/coconutruben/25/base 2025-08-14T21:24:05.9416626Z * [new branch] gh/coconutruben/25/head -> origin/gh/coconutruben/25/head 2025-08-14T21:24:05.9416819Z * [new branch] gh/coconutruben/25/orig -> origin/gh/coconutruben/25/orig 2025-08-14T21:24:05.9416983Z * [new branch] gh/coconutruben/26/base -> origin/gh/coconutruben/26/base 2025-08-14T21:24:05.9417135Z * [new branch] gh/coconutruben/26/head -> origin/gh/coconutruben/26/head 2025-08-14T21:24:05.9417293Z * [new branch] gh/coconutruben/26/orig -> origin/gh/coconutruben/26/orig 2025-08-14T21:24:05.9417446Z * [new branch] gh/coconutruben/27/base -> origin/gh/coconutruben/27/base 2025-08-14T21:24:05.9417602Z * [new branch] gh/coconutruben/27/head -> origin/gh/coconutruben/27/head 2025-08-14T21:24:05.9417765Z * [new branch] gh/coconutruben/27/orig -> origin/gh/coconutruben/27/orig 2025-08-14T21:24:05.9417946Z * [new branch] gh/codingwithsurya/10/base -> origin/gh/codingwithsurya/10/base 2025-08-14T21:24:05.9418118Z * [new branch] gh/codingwithsurya/10/head -> origin/gh/codingwithsurya/10/head 2025-08-14T21:24:05.9418430Z * [new branch] gh/codingwithsurya/10/orig -> origin/gh/codingwithsurya/10/orig 2025-08-14T21:24:05.9418609Z * [new branch] gh/codingwithsurya/11/base -> origin/gh/codingwithsurya/11/base 2025-08-14T21:24:05.9418785Z * [new branch] gh/codingwithsurya/11/head -> origin/gh/codingwithsurya/11/head 2025-08-14T21:24:05.9418951Z * [new branch] gh/codingwithsurya/11/orig -> origin/gh/codingwithsurya/11/orig 2025-08-14T21:24:05.9419132Z * [new branch] gh/codingwithsurya/12/base -> origin/gh/codingwithsurya/12/base 2025-08-14T21:24:05.9419306Z * [new branch] gh/codingwithsurya/12/head -> origin/gh/codingwithsurya/12/head 2025-08-14T21:24:05.9419464Z * [new branch] gh/codingwithsurya/12/orig -> origin/gh/codingwithsurya/12/orig 2025-08-14T21:24:05.9419632Z * [new branch] gh/codingwithsurya/13/base -> origin/gh/codingwithsurya/13/base 2025-08-14T21:24:05.9420050Z * [new branch] gh/codingwithsurya/13/head -> origin/gh/codingwithsurya/13/head 2025-08-14T21:24:05.9420209Z * [new branch] gh/codingwithsurya/13/orig -> origin/gh/codingwithsurya/13/orig 2025-08-14T21:24:05.9420380Z * [new branch] gh/codingwithsurya/14/base -> origin/gh/codingwithsurya/14/base 2025-08-14T21:24:05.9420543Z * [new branch] gh/codingwithsurya/14/head -> origin/gh/codingwithsurya/14/head 2025-08-14T21:24:05.9420721Z * [new branch] 
gh/codingwithsurya/14/orig -> origin/gh/codingwithsurya/14/orig 2025-08-14T21:24:05.9420886Z * [new branch] gh/codingwithsurya/15/base -> origin/gh/codingwithsurya/15/base 2025-08-14T21:24:05.9421056Z * [new branch] gh/codingwithsurya/15/head -> origin/gh/codingwithsurya/15/head 2025-08-14T21:24:05.9421222Z * [new branch] gh/codingwithsurya/15/orig -> origin/gh/codingwithsurya/15/orig 2025-08-14T21:24:05.9421390Z * [new branch] gh/codingwithsurya/16/base -> origin/gh/codingwithsurya/16/base 2025-08-14T21:24:05.9421556Z * [new branch] gh/codingwithsurya/16/head -> origin/gh/codingwithsurya/16/head 2025-08-14T21:24:05.9421719Z * [new branch] gh/codingwithsurya/16/orig -> origin/gh/codingwithsurya/16/orig 2025-08-14T21:24:05.9421881Z * [new branch] gh/codingwithsurya/17/base -> origin/gh/codingwithsurya/17/base 2025-08-14T21:24:05.9422053Z * [new branch] gh/codingwithsurya/17/head -> origin/gh/codingwithsurya/17/head 2025-08-14T21:24:05.9422217Z * [new branch] gh/codingwithsurya/17/orig -> origin/gh/codingwithsurya/17/orig 2025-08-14T21:24:05.9424315Z * [new branch] gh/codingwithsurya/18/base -> origin/gh/codingwithsurya/18/base 2025-08-14T21:24:05.9424494Z * [new branch] gh/codingwithsurya/18/head -> origin/gh/codingwithsurya/18/head 2025-08-14T21:24:05.9424666Z * [new branch] gh/codingwithsurya/18/orig -> origin/gh/codingwithsurya/18/orig 2025-08-14T21:24:05.9424839Z * [new branch] gh/codingwithsurya/19/base -> origin/gh/codingwithsurya/19/base 2025-08-14T21:24:05.9425008Z * [new branch] gh/codingwithsurya/19/head -> origin/gh/codingwithsurya/19/head 2025-08-14T21:24:05.9430540Z * [new branch] gh/codingwithsurya/19/orig -> origin/gh/codingwithsurya/19/orig 2025-08-14T21:24:05.9436060Z * [new branch] gh/codingwithsurya/20/base -> origin/gh/codingwithsurya/20/base 2025-08-14T21:24:05.9437934Z * [new branch] gh/codingwithsurya/20/head -> origin/gh/codingwithsurya/20/head 2025-08-14T21:24:05.9438252Z * [new branch] gh/codingwithsurya/20/orig -> origin/gh/codingwithsurya/20/orig 2025-08-14T21:24:05.9441396Z * [new branch] gh/codingwithsurya/21/base -> origin/gh/codingwithsurya/21/base 2025-08-14T21:24:05.9446254Z * [new branch] gh/codingwithsurya/21/head -> origin/gh/codingwithsurya/21/head 2025-08-14T21:24:05.9449219Z * [new branch] gh/codingwithsurya/21/orig -> origin/gh/codingwithsurya/21/orig 2025-08-14T21:24:05.9449445Z * [new branch] gh/codingwithsurya/8/base -> origin/gh/codingwithsurya/8/base 2025-08-14T21:24:05.9449626Z * [new branch] gh/codingwithsurya/8/head -> origin/gh/codingwithsurya/8/head 2025-08-14T21:24:05.9449795Z * [new branch] gh/codingwithsurya/8/orig -> origin/gh/codingwithsurya/8/orig 2025-08-14T21:24:05.9449966Z * [new branch] gh/codingwithsurya/9/base -> origin/gh/codingwithsurya/9/base 2025-08-14T21:24:05.9450133Z * [new branch] gh/codingwithsurya/9/head -> origin/gh/codingwithsurya/9/head 2025-08-14T21:24:05.9450301Z * [new branch] gh/codingwithsurya/9/orig -> origin/gh/codingwithsurya/9/orig 2025-08-14T21:24:05.9450464Z * [new branch] gh/colinchan15/1/base -> origin/gh/colinchan15/1/base 2025-08-14T21:24:05.9450681Z * [new branch] gh/colinchan15/1/head -> origin/gh/colinchan15/1/head 2025-08-14T21:24:05.9450843Z * [new branch] gh/colinchan15/2/base -> origin/gh/colinchan15/2/base 2025-08-14T21:24:05.9450987Z * [new branch] gh/colinchan15/2/head -> origin/gh/colinchan15/2/head 2025-08-14T21:24:05.9451135Z * [new branch] gh/colinchan15/3/base -> origin/gh/colinchan15/3/base 2025-08-14T21:24:05.9451295Z * [new branch] gh/colinchan15/3/head -> 
origin/gh/colinchan15/3/head 2025-08-14T21:24:05.9451470Z * [new branch] gh/colinchan15/4/base -> origin/gh/colinchan15/4/base 2025-08-14T21:24:05.9451623Z * [new branch] gh/colinchan15/4/head -> origin/gh/colinchan15/4/head 2025-08-14T21:24:05.9451773Z * [new branch] gh/colinchan15/5/base -> origin/gh/colinchan15/5/base 2025-08-14T21:24:05.9451921Z * [new branch] gh/colinchan15/5/head -> origin/gh/colinchan15/5/head 2025-08-14T21:24:05.9452079Z * [new branch] gh/colinchan15/6/base -> origin/gh/colinchan15/6/base 2025-08-14T21:24:05.9452227Z * [new branch] gh/colinchan15/6/head -> origin/gh/colinchan15/6/head 2025-08-14T21:24:05.9452406Z * [new branch] gh/davidberard98/351/base -> origin/gh/davidberard98/351/base 2025-08-14T21:24:05.9452573Z * [new branch] gh/davidberard98/351/head -> origin/gh/davidberard98/351/head 2025-08-14T21:24:05.9452735Z * [new branch] gh/davidberard98/351/orig -> origin/gh/davidberard98/351/orig 2025-08-14T21:24:05.9452902Z * [new branch] gh/davidberard98/353/base -> origin/gh/davidberard98/353/base 2025-08-14T21:24:05.9453063Z * [new branch] gh/davidberard98/353/head -> origin/gh/davidberard98/353/head 2025-08-14T21:24:05.9453221Z * [new branch] gh/davidberard98/353/orig -> origin/gh/davidberard98/353/orig 2025-08-14T21:24:05.9453409Z * [new branch] gh/davidberard98/356/base -> origin/gh/davidberard98/356/base 2025-08-14T21:24:05.9453573Z * [new branch] gh/davidberard98/356/head -> origin/gh/davidberard98/356/head 2025-08-14T21:24:05.9453827Z * [new branch] gh/davidberard98/356/orig -> origin/gh/davidberard98/356/orig 2025-08-14T21:24:05.9454059Z * [new branch] gh/davidberard98/382/base -> origin/gh/davidberard98/382/base 2025-08-14T21:24:05.9454226Z * [new branch] gh/davidberard98/382/head -> origin/gh/davidberard98/382/head 2025-08-14T21:24:05.9454404Z * [new branch] gh/davidberard98/382/orig -> origin/gh/davidberard98/382/orig 2025-08-14T21:24:05.9454973Z * [new branch] gh/davidberard98/386/base -> origin/gh/davidberard98/386/base 2025-08-14T21:24:05.9455157Z * [new branch] gh/davidberard98/386/head -> origin/gh/davidberard98/386/head 2025-08-14T21:24:05.9455360Z * [new branch] gh/davidberard98/386/orig -> origin/gh/davidberard98/386/orig 2025-08-14T21:24:05.9455721Z * [new branch] gh/davidberard98/389/base -> origin/gh/davidberard98/389/base 2025-08-14T21:24:05.9455894Z * [new branch] gh/davidberard98/389/head -> origin/gh/davidberard98/389/head 2025-08-14T21:24:05.9456059Z * [new branch] gh/davidberard98/389/orig -> origin/gh/davidberard98/389/orig 2025-08-14T21:24:05.9456230Z * [new branch] gh/davidberard98/390/base -> origin/gh/davidberard98/390/base 2025-08-14T21:24:05.9456404Z * [new branch] gh/davidberard98/390/head -> origin/gh/davidberard98/390/head 2025-08-14T21:24:05.9456590Z * [new branch] gh/davidberard98/390/orig -> origin/gh/davidberard98/390/orig 2025-08-14T21:24:05.9458032Z * [new branch] gh/davidberard98/391/base -> origin/gh/davidberard98/391/base 2025-08-14T21:24:05.9458477Z * [new branch] gh/davidberard98/391/head -> origin/gh/davidberard98/391/head 2025-08-14T21:24:05.9459138Z * [new branch] gh/davidberard98/391/orig -> origin/gh/davidberard98/391/orig 2025-08-14T21:24:05.9464957Z * [new branch] gh/davidberard98/392/base -> origin/gh/davidberard98/392/base 2025-08-14T21:24:05.9469316Z * [new branch] gh/davidberard98/392/head -> origin/gh/davidberard98/392/head 2025-08-14T21:24:05.9474393Z * [new branch] gh/davidberard98/392/orig -> origin/gh/davidberard98/392/orig 2025-08-14T21:24:05.9476510Z * [new branch] gh/davidberard98/393/base -> 
origin/gh/davidberard98/393/base 2025-08-14T21:24:05.9476692Z * [new branch] gh/davidberard98/393/head -> origin/gh/davidberard98/393/head 2025-08-14T21:24:05.9476848Z * [new branch] gh/davidberard98/393/orig -> origin/gh/davidberard98/393/orig 2025-08-14T21:24:05.9477012Z * [new branch] gh/davidberard98/394/base -> origin/gh/davidberard98/394/base 2025-08-14T21:24:05.9477164Z * [new branch] gh/davidberard98/394/head -> origin/gh/davidberard98/394/head 2025-08-14T21:24:05.9477337Z * [new branch] gh/davidberard98/394/orig -> origin/gh/davidberard98/394/orig 2025-08-14T21:24:05.9477505Z * [new branch] gh/davidberard98/395/base -> origin/gh/davidberard98/395/base 2025-08-14T21:24:05.9477660Z * [new branch] gh/davidberard98/395/head -> origin/gh/davidberard98/395/head 2025-08-14T21:24:05.9477821Z * [new branch] gh/davidberard98/395/orig -> origin/gh/davidberard98/395/orig 2025-08-14T21:24:05.9477972Z * [new branch] gh/davidberard98/396/base -> origin/gh/davidberard98/396/base 2025-08-14T21:24:05.9478121Z * [new branch] gh/davidberard98/396/head -> origin/gh/davidberard98/396/head 2025-08-14T21:24:05.9478282Z * [new branch] gh/davidberard98/396/orig -> origin/gh/davidberard98/396/orig 2025-08-14T21:24:05.9478433Z * [new branch] gh/davidberard98/397/base -> origin/gh/davidberard98/397/base 2025-08-14T21:24:05.9478597Z * [new branch] gh/davidberard98/397/head -> origin/gh/davidberard98/397/head 2025-08-14T21:24:05.9478748Z * [new branch] gh/davidberard98/397/orig -> origin/gh/davidberard98/397/orig 2025-08-14T21:24:05.9478897Z * [new branch] gh/davidberard98/398/base -> origin/gh/davidberard98/398/base 2025-08-14T21:24:05.9479055Z * [new branch] gh/davidberard98/398/head -> origin/gh/davidberard98/398/head 2025-08-14T21:24:05.9479205Z * [new branch] gh/davidberard98/398/orig -> origin/gh/davidberard98/398/orig 2025-08-14T21:24:05.9479360Z * [new branch] gh/desertfire/570/base -> origin/gh/desertfire/570/base 2025-08-14T21:24:05.9481287Z * [new branch] gh/desertfire/570/head -> origin/gh/desertfire/570/head 2025-08-14T21:24:05.9481506Z * [new branch] gh/desertfire/570/orig -> origin/gh/desertfire/570/orig 2025-08-14T21:24:05.9481908Z * [new branch] gh/desertfire/572/base -> origin/gh/desertfire/572/base 2025-08-14T21:24:05.9482203Z * [new branch] gh/desertfire/572/head -> origin/gh/desertfire/572/head 2025-08-14T21:24:05.9482370Z * [new branch] gh/desertfire/572/orig -> origin/gh/desertfire/572/orig 2025-08-14T21:24:05.9482522Z * [new branch] gh/desertfire/589/base -> origin/gh/desertfire/589/base 2025-08-14T21:24:05.9487917Z * [new branch] gh/desertfire/589/head -> origin/gh/desertfire/589/head 2025-08-14T21:24:05.9490064Z * [new branch] gh/desertfire/589/orig -> origin/gh/desertfire/589/orig 2025-08-14T21:24:05.9490361Z * [new branch] gh/desertfire/590/base -> origin/gh/desertfire/590/base 2025-08-14T21:24:05.9493938Z * [new branch] gh/desertfire/590/head -> origin/gh/desertfire/590/head 2025-08-14T21:24:05.9494262Z * [new branch] gh/desertfire/590/orig -> origin/gh/desertfire/590/orig 2025-08-14T21:24:05.9494641Z * [new branch] gh/desertfire/591/base -> origin/gh/desertfire/591/base 2025-08-14T21:24:05.9494810Z * [new branch] gh/desertfire/591/head -> origin/gh/desertfire/591/head 2025-08-14T21:24:05.9494968Z * [new branch] gh/desertfire/591/orig -> origin/gh/desertfire/591/orig 2025-08-14T21:24:05.9495118Z * [new branch] gh/desertfire/592/base -> origin/gh/desertfire/592/base 2025-08-14T21:24:05.9495269Z * [new branch] gh/desertfire/592/head -> origin/gh/desertfire/592/head 
2025-08-14T21:24:05.9495424Z * [new branch] gh/desertfire/592/orig -> origin/gh/desertfire/592/orig 2025-08-14T21:24:05.9495572Z * [new branch] gh/desertfire/593/base -> origin/gh/desertfire/593/base 2025-08-14T21:24:05.9495728Z * [new branch] gh/desertfire/593/head -> origin/gh/desertfire/593/head 2025-08-14T21:24:05.9495876Z * [new branch] gh/desertfire/593/orig -> origin/gh/desertfire/593/orig 2025-08-14T21:24:05.9496032Z * [new branch] gh/desertfire/594/base -> origin/gh/desertfire/594/base 2025-08-14T21:24:05.9496189Z * [new branch] gh/desertfire/594/head -> origin/gh/desertfire/594/head 2025-08-14T21:24:05.9496340Z * [new branch] gh/desertfire/594/orig -> origin/gh/desertfire/594/orig 2025-08-14T21:24:05.9496797Z * [new branch] gh/desertfire/595/base -> origin/gh/desertfire/595/base 2025-08-14T21:24:05.9497532Z * [new branch] gh/desertfire/595/head -> origin/gh/desertfire/595/head 2025-08-14T21:24:05.9498161Z * [new branch] gh/desertfire/595/orig -> origin/gh/desertfire/595/orig 2025-08-14T21:24:05.9499417Z * [new branch] gh/desertfire/596/base -> origin/gh/desertfire/596/base 2025-08-14T21:24:05.9500007Z * [new branch] gh/desertfire/596/head -> origin/gh/desertfire/596/head 2025-08-14T21:24:05.9500711Z * [new branch] gh/desertfire/596/orig -> origin/gh/desertfire/596/orig 2025-08-14T21:24:05.9504634Z * [new branch] gh/desertfire/597/base -> origin/gh/desertfire/597/base 2025-08-14T21:24:05.9504826Z * [new branch] gh/desertfire/597/head -> origin/gh/desertfire/597/head 2025-08-14T21:24:05.9504992Z * [new branch] gh/desertfire/597/orig -> origin/gh/desertfire/597/orig 2025-08-14T21:24:05.9505150Z * [new branch] gh/dharakk/1/base -> origin/gh/dharakk/1/base 2025-08-14T21:24:05.9511608Z * [new branch] gh/dharakk/1/head -> origin/gh/dharakk/1/head 2025-08-14T21:24:05.9511973Z * [new branch] gh/dharakk/4/base -> origin/gh/dharakk/4/base 2025-08-14T21:24:05.9512267Z * [new branch] gh/dharakk/4/head -> origin/gh/dharakk/4/head 2025-08-14T21:24:05.9512425Z * [new branch] gh/dharakk/4/orig -> origin/gh/dharakk/4/orig 2025-08-14T21:24:05.9512762Z * [new branch] gh/drisspg/140/base -> origin/gh/drisspg/140/base 2025-08-14T21:24:05.9512923Z * [new branch] gh/drisspg/140/head -> origin/gh/drisspg/140/head 2025-08-14T21:24:05.9513076Z * [new branch] gh/drisspg/140/orig -> origin/gh/drisspg/140/orig 2025-08-14T21:24:05.9513213Z * [new branch] gh/drisspg/149/base -> origin/gh/drisspg/149/base 2025-08-14T21:24:05.9513349Z * [new branch] gh/drisspg/149/head -> origin/gh/drisspg/149/head 2025-08-14T21:24:05.9513493Z * [new branch] gh/drisspg/149/orig -> origin/gh/drisspg/149/orig 2025-08-14T21:24:05.9513652Z * [new branch] gh/drisspg/150/base -> origin/gh/drisspg/150/base 2025-08-14T21:24:05.9514660Z * [new branch] gh/drisspg/150/head -> origin/gh/drisspg/150/head 2025-08-14T21:24:05.9515200Z * [new branch] gh/drisspg/150/orig -> origin/gh/drisspg/150/orig 2025-08-14T21:24:05.9516404Z * [new branch] gh/drisspg/151/base -> origin/gh/drisspg/151/base 2025-08-14T21:24:05.9516658Z * [new branch] gh/drisspg/151/head -> origin/gh/drisspg/151/head 2025-08-14T21:24:05.9517644Z * [new branch] gh/drisspg/151/orig -> origin/gh/drisspg/151/orig 2025-08-14T21:24:05.9518674Z * [new branch] gh/drisspg/158/base -> origin/gh/drisspg/158/base 2025-08-14T21:24:05.9519013Z * [new branch] gh/drisspg/158/head -> origin/gh/drisspg/158/head 2025-08-14T21:24:05.9522137Z * [new branch] gh/drisspg/158/orig -> origin/gh/drisspg/158/orig 2025-08-14T21:24:05.9522329Z * [new branch] gh/drisspg/159/base -> origin/gh/drisspg/159/base 
2025-08-14T21:24:05.9522482Z  * [new branch]          gh/drisspg/159/head -> origin/gh/drisspg/159/head
    ... (several hundred similar "* [new branch]" entries follow in this fetch: ghstack refs of the form gh/<user>/<n>/{base,head,orig} -> origin/gh/<user>/<n>/{base,head,orig}, for users drisspg, dsjohns2, eellison, etaf, ezyang, fadara01, fduwjj, fegin, fffrog, gmagogsfm, guangyey, guilhermeleobas, and henrylhtsang) ...
2025-08-14T21:24:06.0152011Z  * [new branch]
gh/henrylhtsang/145/orig -> origin/gh/henrylhtsang/145/orig 2025-08-14T21:24:06.0152267Z * [new branch] gh/henrylhtsang/146/base -> origin/gh/henrylhtsang/146/base 2025-08-14T21:24:06.0155552Z * [new branch] gh/henrylhtsang/146/head -> origin/gh/henrylhtsang/146/head 2025-08-14T21:24:06.0155794Z * [new branch] gh/henrylhtsang/146/orig -> origin/gh/henrylhtsang/146/orig 2025-08-14T21:24:06.0155962Z * [new branch] gh/huydhn/1/head -> origin/gh/huydhn/1/head 2025-08-14T21:24:06.0156116Z * [new branch] gh/huydhn/1/next -> origin/gh/huydhn/1/next 2025-08-14T21:24:06.0156256Z * [new branch] gh/huydhn/2/head -> origin/gh/huydhn/2/head 2025-08-14T21:24:06.0161593Z * [new branch] gh/huydhn/2/next -> origin/gh/huydhn/2/next 2025-08-14T21:24:06.0161795Z * [new branch] gh/huydhn/2/orig -> origin/gh/huydhn/2/orig 2025-08-14T21:24:06.0161973Z * [new branch] gh/huydhn/3/head -> origin/gh/huydhn/3/head 2025-08-14T21:24:06.0162120Z * [new branch] gh/huydhn/3/next -> origin/gh/huydhn/3/next 2025-08-14T21:24:06.0162264Z * [new branch] gh/huydhn/3/orig -> origin/gh/huydhn/3/orig 2025-08-14T21:24:06.0162410Z * [new branch] gh/huydhn/4/head -> origin/gh/huydhn/4/head 2025-08-14T21:24:06.0166588Z * [new branch] gh/huydhn/4/next -> origin/gh/huydhn/4/next 2025-08-14T21:24:06.0166779Z * [new branch] gh/huydhn/4/orig -> origin/gh/huydhn/4/orig 2025-08-14T21:24:06.0166933Z * [new branch] gh/huydhn/5/head -> origin/gh/huydhn/5/head 2025-08-14T21:24:06.0167075Z * [new branch] gh/huydhn/5/next -> origin/gh/huydhn/5/next 2025-08-14T21:24:06.0167238Z * [new branch] gh/huydhn/5/orig -> origin/gh/huydhn/5/orig 2025-08-14T21:24:06.0167384Z * [new branch] gh/huydhn/6/head -> origin/gh/huydhn/6/head 2025-08-14T21:24:06.0167525Z * [new branch] gh/huydhn/6/next -> origin/gh/huydhn/6/next 2025-08-14T21:24:06.0167677Z * [new branch] gh/huydhn/6/orig -> origin/gh/huydhn/6/orig 2025-08-14T21:24:06.0167821Z * [new branch] gh/int3/97/base -> origin/gh/int3/97/base 2025-08-14T21:24:06.0167974Z * [new branch] gh/int3/97/head -> origin/gh/int3/97/head 2025-08-14T21:24:06.0168234Z * [new branch] gh/isuruf/101/base -> origin/gh/isuruf/101/base 2025-08-14T21:24:06.0169265Z * [new branch] gh/isuruf/101/head -> origin/gh/isuruf/101/head 2025-08-14T21:24:06.0172649Z * [new branch] gh/isuruf/116/base -> origin/gh/isuruf/116/base 2025-08-14T21:24:06.0172997Z * [new branch] gh/isuruf/116/head -> origin/gh/isuruf/116/head 2025-08-14T21:24:06.0173504Z * [new branch] gh/isuruf/116/orig -> origin/gh/isuruf/116/orig 2025-08-14T21:24:06.0173659Z * [new branch] gh/isuruf/141/base -> origin/gh/isuruf/141/base 2025-08-14T21:24:06.0173811Z * [new branch] gh/isuruf/141/head -> origin/gh/isuruf/141/head 2025-08-14T21:24:06.0173969Z * [new branch] gh/isuruf/141/orig -> origin/gh/isuruf/141/orig 2025-08-14T21:24:06.0174583Z * [new branch] gh/isuruf/142/base -> origin/gh/isuruf/142/base 2025-08-14T21:24:06.0175522Z * [new branch] gh/isuruf/142/head -> origin/gh/isuruf/142/head 2025-08-14T21:24:06.0175868Z * [new branch] gh/isuruf/142/orig -> origin/gh/isuruf/142/orig 2025-08-14T21:24:06.0177886Z * [new branch] gh/isuruf/81/base -> origin/gh/isuruf/81/base 2025-08-14T21:24:06.0178168Z * [new branch] gh/isuruf/81/head -> origin/gh/isuruf/81/head 2025-08-14T21:24:06.0179294Z * [new branch] gh/isuruf/81/orig -> origin/gh/isuruf/81/orig 2025-08-14T21:24:06.0183396Z * [new branch] gh/jamesjwu/140/base -> origin/gh/jamesjwu/140/base 2025-08-14T21:24:06.0183578Z * [new branch] gh/jamesjwu/140/head -> origin/gh/jamesjwu/140/head 2025-08-14T21:24:06.0183742Z * [new 
branch] gh/jamesjwu/140/orig -> origin/gh/jamesjwu/140/orig 2025-08-14T21:24:06.0183887Z * [new branch] gh/jamesjwu/150/base -> origin/gh/jamesjwu/150/base 2025-08-14T21:24:06.0184045Z * [new branch] gh/jamesjwu/150/head -> origin/gh/jamesjwu/150/head 2025-08-14T21:24:06.0187382Z * [new branch] gh/jamesjwu/150/orig -> origin/gh/jamesjwu/150/orig 2025-08-14T21:24:06.0187680Z * [new branch] gh/jamesjwu/154/base -> origin/gh/jamesjwu/154/base 2025-08-14T21:24:06.0192499Z * [new branch] gh/jamesjwu/154/head -> origin/gh/jamesjwu/154/head 2025-08-14T21:24:06.0196941Z * [new branch] gh/jamesjwu/154/orig -> origin/gh/jamesjwu/154/orig 2025-08-14T21:24:06.0201153Z * [new branch] gh/jamesjwu/155/base -> origin/gh/jamesjwu/155/base 2025-08-14T21:24:06.0205481Z * [new branch] gh/jamesjwu/155/head -> origin/gh/jamesjwu/155/head 2025-08-14T21:24:06.0209892Z * [new branch] gh/jamesjwu/155/orig -> origin/gh/jamesjwu/155/orig 2025-08-14T21:24:06.0211119Z * [new branch] gh/jamesjwu/159/base -> origin/gh/jamesjwu/159/base 2025-08-14T21:24:06.0211684Z * [new branch] gh/jamesjwu/159/head -> origin/gh/jamesjwu/159/head 2025-08-14T21:24:06.0211843Z * [new branch] gh/jamesjwu/159/orig -> origin/gh/jamesjwu/159/orig 2025-08-14T21:24:06.0211987Z * [new branch] gh/jamesjwu/163/base -> origin/gh/jamesjwu/163/base 2025-08-14T21:24:06.0212157Z * [new branch] gh/jamesjwu/163/head -> origin/gh/jamesjwu/163/head 2025-08-14T21:24:06.0212298Z * [new branch] gh/jamesjwu/163/orig -> origin/gh/jamesjwu/163/orig 2025-08-14T21:24:06.0212436Z * [new branch] gh/jamesjwu/171/base -> origin/gh/jamesjwu/171/base 2025-08-14T21:24:06.0212582Z * [new branch] gh/jamesjwu/171/head -> origin/gh/jamesjwu/171/head 2025-08-14T21:24:06.0212718Z * [new branch] gh/jamesjwu/171/orig -> origin/gh/jamesjwu/171/orig 2025-08-14T21:24:06.0212859Z * [new branch] gh/jamesjwu/174/base -> origin/gh/jamesjwu/174/base 2025-08-14T21:24:06.0213002Z * [new branch] gh/jamesjwu/174/head -> origin/gh/jamesjwu/174/head 2025-08-14T21:24:06.0213149Z * [new branch] gh/jamesjwu/174/orig -> origin/gh/jamesjwu/174/orig 2025-08-14T21:24:06.0213297Z * [new branch] gh/jamesjwu/175/base -> origin/gh/jamesjwu/175/base 2025-08-14T21:24:06.0213566Z * [new branch] gh/jamesjwu/175/head -> origin/gh/jamesjwu/175/head 2025-08-14T21:24:06.0213719Z * [new branch] gh/jamesjwu/175/orig -> origin/gh/jamesjwu/175/orig 2025-08-14T21:24:06.0213858Z * [new branch] gh/jamesjwu/176/base -> origin/gh/jamesjwu/176/base 2025-08-14T21:24:06.0213997Z * [new branch] gh/jamesjwu/176/head -> origin/gh/jamesjwu/176/head 2025-08-14T21:24:06.0214149Z * [new branch] gh/jamesjwu/176/orig -> origin/gh/jamesjwu/176/orig 2025-08-14T21:24:06.0214291Z * [new branch] gh/jamesjwu/177/base -> origin/gh/jamesjwu/177/base 2025-08-14T21:24:06.0214434Z * [new branch] gh/jamesjwu/177/head -> origin/gh/jamesjwu/177/head 2025-08-14T21:24:06.0214586Z * [new branch] gh/jamesjwu/177/orig -> origin/gh/jamesjwu/177/orig 2025-08-14T21:24:06.0214785Z * [new branch] gh/jamesjwu/178/base -> origin/gh/jamesjwu/178/base 2025-08-14T21:24:06.0214936Z * [new branch] gh/jamesjwu/178/head -> origin/gh/jamesjwu/178/head 2025-08-14T21:24:06.0215077Z * [new branch] gh/jamesjwu/178/orig -> origin/gh/jamesjwu/178/orig 2025-08-14T21:24:06.0215216Z * [new branch] gh/jamesjwu/179/base -> origin/gh/jamesjwu/179/base 2025-08-14T21:24:06.0215365Z * [new branch] gh/jamesjwu/179/head -> origin/gh/jamesjwu/179/head 2025-08-14T21:24:06.0215505Z * [new branch] gh/jamesjwu/179/orig -> origin/gh/jamesjwu/179/orig 2025-08-14T21:24:06.0215654Z * [new 
branch] gh/jamesjwu/180/base -> origin/gh/jamesjwu/180/base 2025-08-14T21:24:06.0215793Z * [new branch] gh/jamesjwu/180/head -> origin/gh/jamesjwu/180/head 2025-08-14T21:24:06.0215935Z * [new branch] gh/jamesjwu/180/orig -> origin/gh/jamesjwu/180/orig 2025-08-14T21:24:06.0216098Z * [new branch] gh/jamesjwu/181/base -> origin/gh/jamesjwu/181/base 2025-08-14T21:24:06.0216237Z * [new branch] gh/jamesjwu/181/head -> origin/gh/jamesjwu/181/head 2025-08-14T21:24:06.0216384Z * [new branch] gh/jamesjwu/181/orig -> origin/gh/jamesjwu/181/orig 2025-08-14T21:24:06.0216530Z * [new branch] gh/jamesjwu/182/base -> origin/gh/jamesjwu/182/base 2025-08-14T21:24:06.0216953Z * [new branch] gh/jamesjwu/182/head -> origin/gh/jamesjwu/182/head 2025-08-14T21:24:06.0218316Z * [new branch] gh/jamesjwu/182/orig -> origin/gh/jamesjwu/182/orig 2025-08-14T21:24:06.0221349Z * [new branch] gh/jamesjwu/183/base -> origin/gh/jamesjwu/183/base 2025-08-14T21:24:06.0221690Z * [new branch] gh/jamesjwu/183/head -> origin/gh/jamesjwu/183/head 2025-08-14T21:24:06.0221867Z * [new branch] gh/jamesjwu/183/orig -> origin/gh/jamesjwu/183/orig 2025-08-14T21:24:06.0222149Z * [new branch] gh/jamesjwu/184/base -> origin/gh/jamesjwu/184/base 2025-08-14T21:24:06.0222339Z * [new branch] gh/jamesjwu/184/head -> origin/gh/jamesjwu/184/head 2025-08-14T21:24:06.0225944Z * [new branch] gh/jamesjwu/184/orig -> origin/gh/jamesjwu/184/orig 2025-08-14T21:24:06.0226222Z * [new branch] gh/jamesjwu/52/base -> origin/gh/jamesjwu/52/base 2025-08-14T21:24:06.0226515Z * [new branch] gh/jamesjwu/52/head -> origin/gh/jamesjwu/52/head 2025-08-14T21:24:06.0226719Z * [new branch] gh/jamesjwu/53/base -> origin/gh/jamesjwu/53/base 2025-08-14T21:24:06.0227003Z * [new branch] gh/jamesjwu/53/head -> origin/gh/jamesjwu/53/head 2025-08-14T21:24:06.0228964Z * [new branch] gh/jamesjwu/54/base -> origin/gh/jamesjwu/54/base 2025-08-14T21:24:06.0229142Z * [new branch] gh/jamesjwu/54/head -> origin/gh/jamesjwu/54/head 2025-08-14T21:24:06.0229551Z * [new branch] gh/jamesjwu/55/base -> origin/gh/jamesjwu/55/base 2025-08-14T21:24:06.0229847Z * [new branch] gh/jamesjwu/55/head -> origin/gh/jamesjwu/55/head 2025-08-14T21:24:06.0230012Z * [new branch] gh/jamesjwu/56/base -> origin/gh/jamesjwu/56/base 2025-08-14T21:24:06.0235265Z * [new branch] gh/jamesjwu/56/head -> origin/gh/jamesjwu/56/head 2025-08-14T21:24:06.0235589Z * [new branch] gh/jamesjwu/57/base -> origin/gh/jamesjwu/57/base 2025-08-14T21:24:06.0235766Z * [new branch] gh/jamesjwu/57/head -> origin/gh/jamesjwu/57/head 2025-08-14T21:24:06.0235930Z * [new branch] gh/jamesjwu/58/base -> origin/gh/jamesjwu/58/base 2025-08-14T21:24:06.0236227Z * [new branch] gh/jamesjwu/58/head -> origin/gh/jamesjwu/58/head 2025-08-14T21:24:06.0236691Z * [new branch] gh/jamesjwu/59/base -> origin/gh/jamesjwu/59/base 2025-08-14T21:24:06.0237398Z * [new branch] gh/jamesjwu/59/head -> origin/gh/jamesjwu/59/head 2025-08-14T21:24:06.0237587Z * [new branch] gh/jamesjwu/60/base -> origin/gh/jamesjwu/60/base 2025-08-14T21:24:06.0239926Z * [new branch] gh/jamesjwu/60/head -> origin/gh/jamesjwu/60/head 2025-08-14T21:24:06.0240256Z * [new branch] gh/jamesjwu/61/base -> origin/gh/jamesjwu/61/base 2025-08-14T21:24:06.0240451Z * [new branch] gh/jamesjwu/61/head -> origin/gh/jamesjwu/61/head 2025-08-14T21:24:06.0240602Z * [new branch] gh/jamesjwu/62/base -> origin/gh/jamesjwu/62/base 2025-08-14T21:24:06.0240758Z * [new branch] gh/jamesjwu/62/head -> origin/gh/jamesjwu/62/head 2025-08-14T21:24:06.0240907Z * [new branch] gh/jamesjwu/63/base -> 
origin/gh/jamesjwu/63/base 2025-08-14T21:24:06.0241200Z * [new branch] gh/jamesjwu/63/head -> origin/gh/jamesjwu/63/head 2025-08-14T21:24:06.0246012Z * [new branch] gh/jamesjwu/64/base -> origin/gh/jamesjwu/64/base 2025-08-14T21:24:06.0246400Z * [new branch] gh/jamesjwu/64/head -> origin/gh/jamesjwu/64/head 2025-08-14T21:24:06.0246568Z * [new branch] gh/jamesjwu/65/base -> origin/gh/jamesjwu/65/base 2025-08-14T21:24:06.0246723Z * [new branch] gh/jamesjwu/65/head -> origin/gh/jamesjwu/65/head 2025-08-14T21:24:06.0246902Z * [new branch] gh/janeyx99/165/base -> origin/gh/janeyx99/165/base 2025-08-14T21:24:06.0247069Z * [new branch] gh/janeyx99/165/head -> origin/gh/janeyx99/165/head 2025-08-14T21:24:06.0247222Z * [new branch] gh/janeyx99/165/orig -> origin/gh/janeyx99/165/orig 2025-08-14T21:24:06.0247379Z * [new branch] gh/janeyx99/201/base -> origin/gh/janeyx99/201/base 2025-08-14T21:24:06.0247544Z * [new branch] gh/janeyx99/201/head -> origin/gh/janeyx99/201/head 2025-08-14T21:24:06.0251590Z * [new branch] gh/janeyx99/201/orig -> origin/gh/janeyx99/201/orig 2025-08-14T21:24:06.0252424Z * [new branch] gh/janeyx99/225/base -> origin/gh/janeyx99/225/base 2025-08-14T21:24:06.0252836Z * [new branch] gh/janeyx99/225/head -> origin/gh/janeyx99/225/head 2025-08-14T21:24:06.0253133Z * [new branch] gh/janeyx99/225/orig -> origin/gh/janeyx99/225/orig 2025-08-14T21:24:06.0253300Z * [new branch] gh/janeyx99/256/base -> origin/gh/janeyx99/256/base 2025-08-14T21:24:06.0253446Z * [new branch] gh/janeyx99/256/head -> origin/gh/janeyx99/256/head 2025-08-14T21:24:06.0253601Z * [new branch] gh/janeyx99/256/orig -> origin/gh/janeyx99/256/orig 2025-08-14T21:24:06.0253755Z * [new branch] gh/janeyx99/268/base -> origin/gh/janeyx99/268/base 2025-08-14T21:24:06.0254344Z * [new branch] gh/janeyx99/268/head -> origin/gh/janeyx99/268/head 2025-08-14T21:24:06.0254655Z * [new branch] gh/janeyx99/268/orig -> origin/gh/janeyx99/268/orig 2025-08-14T21:24:06.0255589Z * [new branch] gh/janeyx99/269/base -> origin/gh/janeyx99/269/base 2025-08-14T21:24:06.0255880Z * [new branch] gh/janeyx99/269/head -> origin/gh/janeyx99/269/head 2025-08-14T21:24:06.0256979Z * [new branch] gh/janeyx99/269/orig -> origin/gh/janeyx99/269/orig 2025-08-14T21:24:06.0258329Z * [new branch] gh/janeyx99/274/base -> origin/gh/janeyx99/274/base 2025-08-14T21:24:06.0258538Z * [new branch] gh/janeyx99/274/head -> origin/gh/janeyx99/274/head 2025-08-14T21:24:06.0259294Z * [new branch] gh/janeyx99/274/orig -> origin/gh/janeyx99/274/orig 2025-08-14T21:24:06.0263330Z * [new branch] gh/janeyx99/276/base -> origin/gh/janeyx99/276/base 2025-08-14T21:24:06.0268266Z * [new branch] gh/janeyx99/276/head -> origin/gh/janeyx99/276/head 2025-08-14T21:24:06.0273861Z * [new branch] gh/janeyx99/276/orig -> origin/gh/janeyx99/276/orig 2025-08-14T21:24:06.0279655Z * [new branch] gh/janeyx99/277/base -> origin/gh/janeyx99/277/base 2025-08-14T21:24:06.0281188Z * [new branch] gh/janeyx99/277/head -> origin/gh/janeyx99/277/head 2025-08-14T21:24:06.0281370Z * [new branch] gh/janeyx99/277/orig -> origin/gh/janeyx99/277/orig 2025-08-14T21:24:06.0281511Z * [new branch] gh/janeyx99/278/base -> origin/gh/janeyx99/278/base 2025-08-14T21:24:06.0281659Z * [new branch] gh/janeyx99/278/head -> origin/gh/janeyx99/278/head 2025-08-14T21:24:06.0281799Z * [new branch] gh/janeyx99/278/orig -> origin/gh/janeyx99/278/orig 2025-08-14T21:24:06.0281952Z * [new branch] gh/janeyx99/279/base -> origin/gh/janeyx99/279/base 2025-08-14T21:24:06.0282101Z * [new branch] gh/janeyx99/279/head -> 
origin/gh/janeyx99/279/head 2025-08-14T21:24:06.0282238Z * [new branch] gh/janeyx99/279/orig -> origin/gh/janeyx99/279/orig 2025-08-14T21:24:06.0282381Z * [new branch] gh/janeyx99/280/base -> origin/gh/janeyx99/280/base 2025-08-14T21:24:06.0282516Z * [new branch] gh/janeyx99/280/head -> origin/gh/janeyx99/280/head 2025-08-14T21:24:06.0282649Z * [new branch] gh/janeyx99/280/orig -> origin/gh/janeyx99/280/orig 2025-08-14T21:24:06.0282830Z * [new branch] gh/janeyx99/281/base -> origin/gh/janeyx99/281/base 2025-08-14T21:24:06.0282976Z * [new branch] gh/janeyx99/281/head -> origin/gh/janeyx99/281/head 2025-08-14T21:24:06.0283120Z * [new branch] gh/janeyx99/281/orig -> origin/gh/janeyx99/281/orig 2025-08-14T21:24:06.0283263Z * [new branch] gh/janeyx99/282/base -> origin/gh/janeyx99/282/base 2025-08-14T21:24:06.0283398Z * [new branch] gh/janeyx99/282/head -> origin/gh/janeyx99/282/head 2025-08-14T21:24:06.0283540Z * [new branch] gh/janeyx99/282/orig -> origin/gh/janeyx99/282/orig 2025-08-14T21:24:06.0283676Z * [new branch] gh/janeyx99/283/base -> origin/gh/janeyx99/283/base 2025-08-14T21:24:06.0283818Z * [new branch] gh/janeyx99/283/head -> origin/gh/janeyx99/283/head 2025-08-14T21:24:06.0283952Z * [new branch] gh/janeyx99/283/orig -> origin/gh/janeyx99/283/orig 2025-08-14T21:24:06.0284085Z * [new branch] gh/janeyx99/284/base -> origin/gh/janeyx99/284/base 2025-08-14T21:24:06.0284223Z * [new branch] gh/janeyx99/284/head -> origin/gh/janeyx99/284/head 2025-08-14T21:24:06.0284359Z * [new branch] gh/janeyx99/284/orig -> origin/gh/janeyx99/284/orig 2025-08-14T21:24:06.0284630Z * [new branch] gh/janeyx99/285/base -> origin/gh/janeyx99/285/base 2025-08-14T21:24:06.0284784Z * [new branch] gh/janeyx99/285/head -> origin/gh/janeyx99/285/head 2025-08-14T21:24:06.0284919Z * [new branch] gh/janeyx99/285/orig -> origin/gh/janeyx99/285/orig 2025-08-14T21:24:06.0287276Z * [new branch] gh/janeyx99/286/base -> origin/gh/janeyx99/286/base 2025-08-14T21:24:06.0288000Z * [new branch] gh/janeyx99/286/head -> origin/gh/janeyx99/286/head 2025-08-14T21:24:06.0288292Z * [new branch] gh/janeyx99/286/orig -> origin/gh/janeyx99/286/orig 2025-08-14T21:24:06.0288548Z * [new branch] gh/janeyx99/287/base -> origin/gh/janeyx99/287/base 2025-08-14T21:24:06.0288793Z * [new branch] gh/janeyx99/287/head -> origin/gh/janeyx99/287/head 2025-08-14T21:24:06.0289231Z * [new branch] gh/janeyx99/287/orig -> origin/gh/janeyx99/287/orig 2025-08-14T21:24:06.0289495Z * [new branch] gh/janeyx99/288/base -> origin/gh/janeyx99/288/base 2025-08-14T21:24:06.0289758Z * [new branch] gh/janeyx99/288/head -> origin/gh/janeyx99/288/head 2025-08-14T21:24:06.0289984Z * [new branch] gh/janeyx99/288/orig -> origin/gh/janeyx99/288/orig 2025-08-14T21:24:06.0290666Z * [new branch] gh/janeyx99/289/base -> origin/gh/janeyx99/289/base 2025-08-14T21:24:06.0294606Z * [new branch] gh/janeyx99/289/head -> origin/gh/janeyx99/289/head 2025-08-14T21:24:06.0294797Z * [new branch] gh/janeyx99/289/orig -> origin/gh/janeyx99/289/orig 2025-08-14T21:24:06.0294982Z * [new branch] gh/janeyx99/290/base -> origin/gh/janeyx99/290/base 2025-08-14T21:24:06.0295132Z * [new branch] gh/janeyx99/290/head -> origin/gh/janeyx99/290/head 2025-08-14T21:24:06.0295308Z * [new branch] gh/janeyx99/290/orig -> origin/gh/janeyx99/290/orig 2025-08-14T21:24:06.0295911Z * [new branch] gh/janeyx99/291/base -> origin/gh/janeyx99/291/base 2025-08-14T21:24:06.0296758Z * [new branch] gh/janeyx99/291/head -> origin/gh/janeyx99/291/head 2025-08-14T21:24:06.0297463Z * [new branch] gh/janeyx99/291/orig -> 
origin/gh/janeyx99/291/orig 2025-08-14T21:24:06.0298780Z * [new branch] gh/janeyx99/292/base -> origin/gh/janeyx99/292/base 2025-08-14T21:24:06.0299020Z * [new branch] gh/janeyx99/292/head -> origin/gh/janeyx99/292/head 2025-08-14T21:24:06.0300032Z * [new branch] gh/janeyx99/292/orig -> origin/gh/janeyx99/292/orig 2025-08-14T21:24:06.0301101Z * [new branch] gh/janeyx99/293/base -> origin/gh/janeyx99/293/base 2025-08-14T21:24:06.0301585Z * [new branch] gh/janeyx99/293/head -> origin/gh/janeyx99/293/head 2025-08-14T21:24:06.0302275Z * [new branch] gh/janeyx99/293/orig -> origin/gh/janeyx99/293/orig 2025-08-14T21:24:06.0303806Z * [new branch] gh/janeyx99/294/base -> origin/gh/janeyx99/294/base 2025-08-14T21:24:06.0304217Z * [new branch] gh/janeyx99/294/head -> origin/gh/janeyx99/294/head 2025-08-14T21:24:06.0304686Z * [new branch] gh/janeyx99/294/orig -> origin/gh/janeyx99/294/orig 2025-08-14T21:24:06.0306698Z * [new branch] gh/janeyx99/295/base -> origin/gh/janeyx99/295/base 2025-08-14T21:24:06.0306887Z * [new branch] gh/janeyx99/295/head -> origin/gh/janeyx99/295/head 2025-08-14T21:24:06.0307053Z * [new branch] gh/janeyx99/295/orig -> origin/gh/janeyx99/295/orig 2025-08-14T21:24:06.0311614Z * [new branch] gh/janeyx99/296/base -> origin/gh/janeyx99/296/base 2025-08-14T21:24:06.0313483Z * [new branch] gh/janeyx99/296/head -> origin/gh/janeyx99/296/head 2025-08-14T21:24:06.0313821Z * [new branch] gh/janeyx99/296/orig -> origin/gh/janeyx99/296/orig 2025-08-14T21:24:06.0313980Z * [new branch] gh/janeyx99/297/base -> origin/gh/janeyx99/297/base 2025-08-14T21:24:06.0314133Z * [new branch] gh/janeyx99/297/head -> origin/gh/janeyx99/297/head 2025-08-14T21:24:06.0314279Z * [new branch] gh/janeyx99/297/orig -> origin/gh/janeyx99/297/orig 2025-08-14T21:24:06.0314428Z * [new branch] gh/janeyx99/298/base -> origin/gh/janeyx99/298/base 2025-08-14T21:24:06.0314579Z * [new branch] gh/janeyx99/298/head -> origin/gh/janeyx99/298/head 2025-08-14T21:24:06.0317587Z * [new branch] gh/janeyx99/298/orig -> origin/gh/janeyx99/298/orig 2025-08-14T21:24:06.0317746Z * [new branch] gh/janeyx99/299/base -> origin/gh/janeyx99/299/base 2025-08-14T21:24:06.0317891Z * [new branch] gh/janeyx99/299/head -> origin/gh/janeyx99/299/head 2025-08-14T21:24:06.0318189Z * [new branch] gh/janeyx99/299/orig -> origin/gh/janeyx99/299/orig 2025-08-14T21:24:06.0318345Z * [new branch] gh/janeyx99/300/base -> origin/gh/janeyx99/300/base 2025-08-14T21:24:06.0322830Z * [new branch] gh/janeyx99/300/head -> origin/gh/janeyx99/300/head 2025-08-14T21:24:06.0326462Z * [new branch] gh/janeyx99/300/orig -> origin/gh/janeyx99/300/orig 2025-08-14T21:24:06.0326634Z * [new branch] gh/janeyx99/88/base -> origin/gh/janeyx99/88/base 2025-08-14T21:24:06.0326782Z * [new branch] gh/janeyx99/88/head -> origin/gh/janeyx99/88/head 2025-08-14T21:24:06.0326956Z * [new branch] gh/janeyx99/88/orig -> origin/gh/janeyx99/88/orig 2025-08-14T21:24:06.0327115Z * [new branch] gh/jansel/360/base -> origin/gh/jansel/360/base 2025-08-14T21:24:06.0327301Z * [new branch] gh/jansel/360/head -> origin/gh/jansel/360/head 2025-08-14T21:24:06.0327492Z * [new branch] gh/jansel/451/base -> origin/gh/jansel/451/base 2025-08-14T21:24:06.0327636Z * [new branch] gh/jansel/451/head -> origin/gh/jansel/451/head 2025-08-14T21:24:06.0327792Z * [new branch] gh/jansel/451/orig -> origin/gh/jansel/451/orig 2025-08-14T21:24:06.0331792Z * [new branch] gh/jansel/462/base -> origin/gh/jansel/462/base 2025-08-14T21:24:06.0331961Z * [new branch] gh/jansel/462/head -> origin/gh/jansel/462/head 
2025-08-14T21:24:06.0332189Z * [new branch] gh/jansel/462/orig -> origin/gh/jansel/462/orig 2025-08-14T21:24:06.0332339Z * [new branch] gh/jansel/531/base -> origin/gh/jansel/531/base 2025-08-14T21:24:06.0332585Z * [new branch] gh/jansel/531/head -> origin/gh/jansel/531/head 2025-08-14T21:24:06.0335155Z * [new branch] gh/jansel/531/orig -> origin/gh/jansel/531/orig 2025-08-14T21:24:06.0335368Z * [new branch] gh/jansel/534/base -> origin/gh/jansel/534/base 2025-08-14T21:24:06.0335515Z * [new branch] gh/jansel/534/head -> origin/gh/jansel/534/head 2025-08-14T21:24:06.0335662Z * [new branch] gh/jansel/534/orig -> origin/gh/jansel/534/orig 2025-08-14T21:24:06.0335842Z * [new branch] gh/jbschlosser/226/base -> origin/gh/jbschlosser/226/base 2025-08-14T21:24:06.0336000Z * [new branch] gh/jbschlosser/226/head -> origin/gh/jbschlosser/226/head 2025-08-14T21:24:06.0336164Z * [new branch] gh/jbschlosser/226/orig -> origin/gh/jbschlosser/226/orig 2025-08-14T21:24:06.0336360Z * [new branch] gh/jbschlosser/239/base -> origin/gh/jbschlosser/239/base 2025-08-14T21:24:06.0337546Z * [new branch] gh/jbschlosser/239/head -> origin/gh/jbschlosser/239/head 2025-08-14T21:24:06.0338159Z * [new branch] gh/jbschlosser/239/orig -> origin/gh/jbschlosser/239/orig 2025-08-14T21:24:06.0339653Z * [new branch] gh/jbschlosser/247/base -> origin/gh/jbschlosser/247/base 2025-08-14T21:24:06.0340115Z * [new branch] gh/jbschlosser/247/head -> origin/gh/jbschlosser/247/head 2025-08-14T21:24:06.0341564Z * [new branch] gh/jbschlosser/247/orig -> origin/gh/jbschlosser/247/orig 2025-08-14T21:24:06.0342978Z * [new branch] gh/jbschlosser/248/base -> origin/gh/jbschlosser/248/base 2025-08-14T21:24:06.0350073Z * [new branch] gh/jbschlosser/248/head -> origin/gh/jbschlosser/248/head 2025-08-14T21:24:06.0355778Z * [new branch] gh/jbschlosser/248/orig -> origin/gh/jbschlosser/248/orig 2025-08-14T21:24:06.0356144Z * [new branch] gh/jbschlosser/249/base -> origin/gh/jbschlosser/249/base 2025-08-14T21:24:06.0356687Z * [new branch] gh/jbschlosser/249/head -> origin/gh/jbschlosser/249/head 2025-08-14T21:24:06.0356914Z * [new branch] gh/jbschlosser/249/orig -> origin/gh/jbschlosser/249/orig 2025-08-14T21:24:06.0357617Z * [new branch] gh/jbschlosser/250/base -> origin/gh/jbschlosser/250/base 2025-08-14T21:24:06.0357823Z * [new branch] gh/jbschlosser/250/head -> origin/gh/jbschlosser/250/head 2025-08-14T21:24:06.0357987Z * [new branch] gh/jbschlosser/250/orig -> origin/gh/jbschlosser/250/orig 2025-08-14T21:24:06.0358162Z * [new branch] gh/jiayisunx/57/base -> origin/gh/jiayisunx/57/base 2025-08-14T21:24:06.0358311Z * [new branch] gh/jiayisunx/57/head -> origin/gh/jiayisunx/57/head 2025-08-14T21:24:06.0358462Z * [new branch] gh/jiayisunx/57/orig -> origin/gh/jiayisunx/57/orig 2025-08-14T21:24:06.0363117Z * [new branch] gh/jiayisunx/59/base -> origin/gh/jiayisunx/59/base 2025-08-14T21:24:06.0363491Z * [new branch] gh/jiayisunx/59/head -> origin/gh/jiayisunx/59/head 2025-08-14T21:24:06.0363689Z * [new branch] gh/jiayisunx/59/orig -> origin/gh/jiayisunx/59/orig 2025-08-14T21:24:06.0363882Z * [new branch] gh/jiayisunx/61/base -> origin/gh/jiayisunx/61/base 2025-08-14T21:24:06.0401256Z * [new branch] gh/jiayisunx/61/head -> origin/gh/jiayisunx/61/head 2025-08-14T21:24:06.0401715Z * [new branch] gh/jiayisunx/61/orig -> origin/gh/jiayisunx/61/orig 2025-08-14T21:24:06.0401998Z * [new branch] gh/jiayisunx/63/base -> origin/gh/jiayisunx/63/base 2025-08-14T21:24:06.0402167Z * [new branch] gh/jiayisunx/63/head -> origin/gh/jiayisunx/63/head 
2025-08-14T21:24:06.0402352Z * [new branch] gh/jiayisunx/63/orig -> origin/gh/jiayisunx/63/orig 2025-08-14T21:24:06.0402553Z * [new branch] gh/jiayisunx/64/base -> origin/gh/jiayisunx/64/base 2025-08-14T21:24:06.0402745Z * [new branch] gh/jiayisunx/64/head -> origin/gh/jiayisunx/64/head 2025-08-14T21:24:06.0402941Z * [new branch] gh/jiayisunx/64/orig -> origin/gh/jiayisunx/64/orig 2025-08-14T21:24:06.0403084Z * [new branch] gh/jiayisunx/65/base -> origin/gh/jiayisunx/65/base 2025-08-14T21:24:06.0403239Z * [new branch] gh/jiayisunx/65/head -> origin/gh/jiayisunx/65/head 2025-08-14T21:24:06.0403382Z * [new branch] gh/jiayisunx/65/orig -> origin/gh/jiayisunx/65/orig 2025-08-14T21:24:06.0403524Z * [new branch] gh/jiayisunx/66/base -> origin/gh/jiayisunx/66/base 2025-08-14T21:24:06.0403671Z * [new branch] gh/jiayisunx/66/head -> origin/gh/jiayisunx/66/head 2025-08-14T21:24:06.0403805Z * [new branch] gh/jiayisunx/66/orig -> origin/gh/jiayisunx/66/orig 2025-08-14T21:24:06.0403952Z * [new branch] gh/jiayisunx/67/base -> origin/gh/jiayisunx/67/base 2025-08-14T21:24:06.0404301Z * [new branch] gh/jiayisunx/67/head -> origin/gh/jiayisunx/67/head 2025-08-14T21:24:06.0404444Z * [new branch] gh/jiayisunx/67/orig -> origin/gh/jiayisunx/67/orig 2025-08-14T21:24:06.0404596Z * [new branch] gh/jiayisunx/68/base -> origin/gh/jiayisunx/68/base 2025-08-14T21:24:06.0404737Z * [new branch] gh/jiayisunx/68/head -> origin/gh/jiayisunx/68/head 2025-08-14T21:24:06.0404885Z * [new branch] gh/jiayisunx/68/orig -> origin/gh/jiayisunx/68/orig 2025-08-14T21:24:06.0405053Z * [new branch] gh/jjwu@meta.com/1/base -> origin/gh/jjwu@meta.com/1/base 2025-08-14T21:24:06.0405209Z * [new branch] gh/jjwu@meta.com/1/head -> origin/gh/jjwu@meta.com/1/head 2025-08-14T21:24:06.0405375Z * [new branch] gh/justinchuby/111/base -> origin/gh/justinchuby/111/base 2025-08-14T21:24:06.0405581Z * [new branch] gh/justinchuby/111/head -> origin/gh/justinchuby/111/head 2025-08-14T21:24:06.0405744Z * [new branch] gh/justinchuby/111/orig -> origin/gh/justinchuby/111/orig 2025-08-14T21:24:06.0405898Z * [new branch] gh/kurtamohler/32/base -> origin/gh/kurtamohler/32/base 2025-08-14T21:24:06.0406052Z * [new branch] gh/kurtamohler/32/head -> origin/gh/kurtamohler/32/head 2025-08-14T21:24:06.0406208Z * [new branch] gh/kurtamohler/32/orig -> origin/gh/kurtamohler/32/orig 2025-08-14T21:24:06.0406354Z * [new branch] gh/kurtamohler/33/base -> origin/gh/kurtamohler/33/base 2025-08-14T21:24:06.0406502Z * [new branch] gh/kurtamohler/33/head -> origin/gh/kurtamohler/33/head 2025-08-14T21:24:06.0406661Z * [new branch] gh/kurtamohler/33/orig -> origin/gh/kurtamohler/33/orig 2025-08-14T21:24:06.0406826Z * [new branch] gh/kurtamohler/34/base -> origin/gh/kurtamohler/34/base 2025-08-14T21:24:06.0406988Z * [new branch] gh/kurtamohler/34/head -> origin/gh/kurtamohler/34/head 2025-08-14T21:24:06.0407136Z * [new branch] gh/kurtamohler/34/orig -> origin/gh/kurtamohler/34/orig 2025-08-14T21:24:06.0407292Z * [new branch] gh/kurtamohler/40/base -> origin/gh/kurtamohler/40/base 2025-08-14T21:24:06.0407438Z * [new branch] gh/kurtamohler/40/head -> origin/gh/kurtamohler/40/head 2025-08-14T21:24:06.0407583Z * [new branch] gh/kurtamohler/40/orig -> origin/gh/kurtamohler/40/orig 2025-08-14T21:24:06.0407738Z * [new branch] gh/kurtamohler/41/base -> origin/gh/kurtamohler/41/base 2025-08-14T21:24:06.0407886Z * [new branch] gh/kurtamohler/41/head -> origin/gh/kurtamohler/41/head 2025-08-14T21:24:06.0408032Z * [new branch] gh/kurtamohler/41/orig -> origin/gh/kurtamohler/41/orig 
2025-08-14T21:24:06.0408195Z * [new branch] gh/kurtamohler/42/base -> origin/gh/kurtamohler/42/base 2025-08-14T21:24:06.0408343Z * [new branch] gh/kurtamohler/42/head -> origin/gh/kurtamohler/42/head 2025-08-14T21:24:06.0408500Z * [new branch] gh/kurtamohler/42/orig -> origin/gh/kurtamohler/42/orig 2025-08-14T21:24:06.0408645Z * [new branch] gh/kurtamohler/43/base -> origin/gh/kurtamohler/43/base 2025-08-14T21:24:06.0408792Z * [new branch] gh/kurtamohler/43/head -> origin/gh/kurtamohler/43/head 2025-08-14T21:24:06.0408945Z * [new branch] gh/kurtamohler/43/orig -> origin/gh/kurtamohler/43/orig 2025-08-14T21:24:06.0409092Z * [new branch] gh/kurtamohler/44/base -> origin/gh/kurtamohler/44/base 2025-08-14T21:24:06.0409244Z * [new branch] gh/kurtamohler/44/head -> origin/gh/kurtamohler/44/head 2025-08-14T21:24:06.0409388Z * [new branch] gh/kurtamohler/44/orig -> origin/gh/kurtamohler/44/orig 2025-08-14T21:24:06.0409575Z * [new branch] gh/kurtamohler/45/base -> origin/gh/kurtamohler/45/base 2025-08-14T21:24:06.0409729Z * [new branch] gh/kurtamohler/45/head -> origin/gh/kurtamohler/45/head 2025-08-14T21:24:06.0409873Z * [new branch] gh/kurtamohler/45/orig -> origin/gh/kurtamohler/45/orig 2025-08-14T21:24:06.0410025Z * [new branch] gh/kurtamohler/46/base -> origin/gh/kurtamohler/46/base 2025-08-14T21:24:06.0410168Z * [new branch] gh/kurtamohler/46/head -> origin/gh/kurtamohler/46/head 2025-08-14T21:24:06.0410312Z * [new branch] gh/kurtamohler/46/orig -> origin/gh/kurtamohler/46/orig 2025-08-14T21:24:06.0410463Z * [new branch] gh/kwen2501/130/base -> origin/gh/kwen2501/130/base 2025-08-14T21:24:06.0410599Z * [new branch] gh/kwen2501/130/head -> origin/gh/kwen2501/130/head 2025-08-14T21:24:06.0410734Z * [new branch] gh/kwen2501/130/orig -> origin/gh/kwen2501/130/orig 2025-08-14T21:24:06.0410938Z * [new branch] gh/kwen2501/142/base -> origin/gh/kwen2501/142/base 2025-08-14T21:24:06.0411075Z * [new branch] gh/kwen2501/142/head -> origin/gh/kwen2501/142/head 2025-08-14T21:24:06.0411217Z * [new branch] gh/kwen2501/142/orig -> origin/gh/kwen2501/142/orig 2025-08-14T21:24:06.0411361Z * [new branch] gh/kwen2501/15/base -> origin/gh/kwen2501/15/base 2025-08-14T21:24:06.0411506Z * [new branch] gh/kwen2501/15/head -> origin/gh/kwen2501/15/head 2025-08-14T21:24:06.0411652Z * [new branch] gh/kwen2501/156/base -> origin/gh/kwen2501/156/base 2025-08-14T21:24:06.0411788Z * [new branch] gh/kwen2501/156/head -> origin/gh/kwen2501/156/head 2025-08-14T21:24:06.0411936Z * [new branch] gh/kwen2501/156/orig -> origin/gh/kwen2501/156/orig 2025-08-14T21:24:06.0412073Z * [new branch] gh/kwen2501/170/base -> origin/gh/kwen2501/170/base 2025-08-14T21:24:06.0412210Z * [new branch] gh/kwen2501/170/head -> origin/gh/kwen2501/170/head 2025-08-14T21:24:06.0412492Z * [new branch] gh/kwen2501/179/base -> origin/gh/kwen2501/179/base 2025-08-14T21:24:06.0412641Z * [new branch] gh/kwen2501/179/head -> origin/gh/kwen2501/179/head 2025-08-14T21:24:06.0412867Z * [new branch] gh/kwen2501/179/orig -> origin/gh/kwen2501/179/orig 2025-08-14T21:24:06.0414254Z * [new branch] gh/kwen2501/181/base -> origin/gh/kwen2501/181/base 2025-08-14T21:24:06.0414924Z * [new branch] gh/kwen2501/181/head -> origin/gh/kwen2501/181/head 2025-08-14T21:24:06.0416783Z * [new branch] gh/kwen2501/181/orig -> origin/gh/kwen2501/181/orig 2025-08-14T21:24:06.0416957Z * [new branch] gh/kwen2501/183/base -> origin/gh/kwen2501/183/base 2025-08-14T21:24:06.0417125Z * [new branch] gh/kwen2501/183/head -> origin/gh/kwen2501/183/head 2025-08-14T21:24:06.0417335Z * [new 
branch] gh/kwen2501/183/orig -> origin/gh/kwen2501/183/orig 2025-08-14T21:24:06.0419120Z * [new branch] gh/kwen2501/184/base -> origin/gh/kwen2501/184/base 2025-08-14T21:24:06.0419416Z * [new branch] gh/kwen2501/184/head -> origin/gh/kwen2501/184/head 2025-08-14T21:24:06.0419584Z * [new branch] gh/kwen2501/184/orig -> origin/gh/kwen2501/184/orig 2025-08-14T21:24:06.0425864Z * [new branch] gh/kwen2501/186/base -> origin/gh/kwen2501/186/base 2025-08-14T21:24:06.0426032Z * [new branch] gh/kwen2501/186/head -> origin/gh/kwen2501/186/head 2025-08-14T21:24:06.0426173Z * [new branch] gh/kwen2501/186/orig -> origin/gh/kwen2501/186/orig 2025-08-14T21:24:06.0426312Z * [new branch] gh/kwen2501/187/base -> origin/gh/kwen2501/187/base 2025-08-14T21:24:06.0426617Z * [new branch] gh/kwen2501/187/head -> origin/gh/kwen2501/187/head 2025-08-14T21:24:06.0426756Z * [new branch] gh/kwen2501/187/orig -> origin/gh/kwen2501/187/orig 2025-08-14T21:24:06.0430499Z * [new branch] gh/kwen2501/188/base -> origin/gh/kwen2501/188/base 2025-08-14T21:24:06.0431082Z * [new branch] gh/kwen2501/188/head -> origin/gh/kwen2501/188/head 2025-08-14T21:24:06.0431253Z * [new branch] gh/kwen2501/188/orig -> origin/gh/kwen2501/188/orig 2025-08-14T21:24:06.0431416Z * [new branch] gh/kwen2501/194/base -> origin/gh/kwen2501/194/base 2025-08-14T21:24:06.0431580Z * [new branch] gh/kwen2501/194/head -> origin/gh/kwen2501/194/head 2025-08-14T21:24:06.0431745Z * [new branch] gh/kwen2501/194/orig -> origin/gh/kwen2501/194/orig 2025-08-14T21:24:06.0432805Z * [new branch] gh/kwen2501/195/base -> origin/gh/kwen2501/195/base 2025-08-14T21:24:06.0433130Z * [new branch] gh/kwen2501/195/head -> origin/gh/kwen2501/195/head 2025-08-14T21:24:06.0433278Z * [new branch] gh/kwen2501/195/orig -> origin/gh/kwen2501/195/orig 2025-08-14T21:24:06.0433436Z * [new branch] gh/kwen2501/196/base -> origin/gh/kwen2501/196/base 2025-08-14T21:24:06.0433581Z * [new branch] gh/kwen2501/196/head -> origin/gh/kwen2501/196/head 2025-08-14T21:24:06.0433746Z * [new branch] gh/kwen2501/196/orig -> origin/gh/kwen2501/196/orig 2025-08-14T21:24:06.0436592Z * [new branch] gh/kwen2501/197/base -> origin/gh/kwen2501/197/base 2025-08-14T21:24:06.0437162Z * [new branch] gh/kwen2501/197/head -> origin/gh/kwen2501/197/head 2025-08-14T21:24:06.0437341Z * [new branch] gh/kwen2501/197/orig -> origin/gh/kwen2501/197/orig 2025-08-14T21:24:06.0437522Z * [new branch] gh/kwen2501/198/base -> origin/gh/kwen2501/198/base 2025-08-14T21:24:06.0437678Z * [new branch] gh/kwen2501/198/head -> origin/gh/kwen2501/198/head 2025-08-14T21:24:06.0437823Z * [new branch] gh/kwen2501/198/orig -> origin/gh/kwen2501/198/orig 2025-08-14T21:24:06.0441146Z * [new branch] gh/kwen2501/199/base -> origin/gh/kwen2501/199/base 2025-08-14T21:24:06.0441362Z * [new branch] gh/kwen2501/199/head -> origin/gh/kwen2501/199/head 2025-08-14T21:24:06.0441526Z * [new branch] gh/kwen2501/199/orig -> origin/gh/kwen2501/199/orig 2025-08-14T21:24:06.0441675Z * [new branch] gh/kwen2501/200/base -> origin/gh/kwen2501/200/base 2025-08-14T21:24:06.0441962Z * [new branch] gh/kwen2501/200/head -> origin/gh/kwen2501/200/head 2025-08-14T21:24:06.0442122Z * [new branch] gh/kwen2501/200/orig -> origin/gh/kwen2501/200/orig 2025-08-14T21:24:06.0444906Z * [new branch] gh/kwen2501/201/base -> origin/gh/kwen2501/201/base 2025-08-14T21:24:06.0445163Z * [new branch] gh/kwen2501/201/head -> origin/gh/kwen2501/201/head 2025-08-14T21:24:06.0445309Z * [new branch] gh/kwen2501/201/orig -> origin/gh/kwen2501/201/orig 2025-08-14T21:24:06.0445462Z * [new 
branch] gh/kwen2501/202/base -> origin/gh/kwen2501/202/base 2025-08-14T21:24:06.0448981Z * [new branch] gh/kwen2501/202/head -> origin/gh/kwen2501/202/head 2025-08-14T21:24:06.0449312Z * [new branch] gh/kwen2501/202/orig -> origin/gh/kwen2501/202/orig 2025-08-14T21:24:06.0449478Z * [new branch] gh/kwen2501/203/base -> origin/gh/kwen2501/203/base 2025-08-14T21:24:06.0449626Z * [new branch] gh/kwen2501/203/head -> origin/gh/kwen2501/203/head 2025-08-14T21:24:06.0449775Z * [new branch] gh/kwen2501/203/orig -> origin/gh/kwen2501/203/orig 2025-08-14T21:24:06.0450163Z * [new branch] gh/laithsakka/152/base -> origin/gh/laithsakka/152/base 2025-08-14T21:24:06.0453194Z * [new branch] gh/laithsakka/152/head -> origin/gh/laithsakka/152/head 2025-08-14T21:24:06.0453821Z * [new branch] gh/laithsakka/152/orig -> origin/gh/laithsakka/152/orig 2025-08-14T21:24:06.0454036Z * [new branch] gh/laithsakka/156/base -> origin/gh/laithsakka/156/base 2025-08-14T21:24:06.0454217Z * [new branch] gh/laithsakka/156/head -> origin/gh/laithsakka/156/head 2025-08-14T21:24:06.0454398Z * [new branch] gh/laithsakka/156/orig -> origin/gh/laithsakka/156/orig 2025-08-14T21:24:06.0454579Z * [new branch] gh/laithsakka/159/base -> origin/gh/laithsakka/159/base 2025-08-14T21:24:06.0457729Z * [new branch] gh/laithsakka/159/head -> origin/gh/laithsakka/159/head 2025-08-14T21:24:06.0457914Z * [new branch] gh/laithsakka/159/orig -> origin/gh/laithsakka/159/orig 2025-08-14T21:24:06.0458327Z * [new branch] gh/laithsakka/160/base -> origin/gh/laithsakka/160/base 2025-08-14T21:24:06.0458492Z * [new branch] gh/laithsakka/160/head -> origin/gh/laithsakka/160/head 2025-08-14T21:24:06.0458659Z * [new branch] gh/laithsakka/160/orig -> origin/gh/laithsakka/160/orig 2025-08-14T21:24:06.0458816Z * [new branch] gh/laithsakka/178/base -> origin/gh/laithsakka/178/base 2025-08-14T21:24:06.0459329Z * [new branch] gh/laithsakka/178/head -> origin/gh/laithsakka/178/head 2025-08-14T21:24:06.0459506Z * [new branch] gh/laithsakka/178/orig -> origin/gh/laithsakka/178/orig 2025-08-14T21:24:06.0466487Z * [new branch] gh/laithsakka/191/base -> origin/gh/laithsakka/191/base 2025-08-14T21:24:06.0466828Z * [new branch] gh/laithsakka/191/head -> origin/gh/laithsakka/191/head 2025-08-14T21:24:06.0467057Z * [new branch] gh/laithsakka/191/orig -> origin/gh/laithsakka/191/orig 2025-08-14T21:24:06.0467320Z * [new branch] gh/laithsakka/234/base -> origin/gh/laithsakka/234/base 2025-08-14T21:24:06.0467567Z * [new branch] gh/laithsakka/234/head -> origin/gh/laithsakka/234/head 2025-08-14T21:24:06.0467735Z * [new branch] gh/laithsakka/234/orig -> origin/gh/laithsakka/234/orig 2025-08-14T21:24:06.0468307Z * [new branch] gh/laithsakka/237/base -> origin/gh/laithsakka/237/base 2025-08-14T21:24:06.0468498Z * [new branch] gh/laithsakka/237/head -> origin/gh/laithsakka/237/head 2025-08-14T21:24:06.0472448Z * [new branch] gh/laithsakka/237/orig -> origin/gh/laithsakka/237/orig 2025-08-14T21:24:06.0472792Z * [new branch] gh/laithsakka/238/base -> origin/gh/laithsakka/238/base 2025-08-14T21:24:06.0472968Z * [new branch] gh/laithsakka/238/head -> origin/gh/laithsakka/238/head 2025-08-14T21:24:06.0473278Z * [new branch] gh/laithsakka/238/orig -> origin/gh/laithsakka/238/orig 2025-08-14T21:24:06.0473427Z * [new branch] gh/laithsakka/239/base -> origin/gh/laithsakka/239/base 2025-08-14T21:24:06.0473720Z * [new branch] gh/laithsakka/239/head -> origin/gh/laithsakka/239/head 2025-08-14T21:24:06.0473888Z * [new branch] gh/laithsakka/239/orig -> origin/gh/laithsakka/239/orig 
2025-08-14T21:24:06.0474545Z * [new branch] gh/laithsakka/240/base -> origin/gh/laithsakka/240/base 2025-08-14T21:24:06.0474746Z * [new branch] gh/laithsakka/240/head -> origin/gh/laithsakka/240/head 2025-08-14T21:24:06.0474900Z * [new branch] gh/laithsakka/240/orig -> origin/gh/laithsakka/240/orig 2025-08-14T21:24:06.0475056Z * [new branch] gh/laithsakka/242/base -> origin/gh/laithsakka/242/base 2025-08-14T21:24:06.0475234Z * [new branch] gh/laithsakka/242/head -> origin/gh/laithsakka/242/head 2025-08-14T21:24:06.0475699Z * [new branch] gh/laithsakka/242/orig -> origin/gh/laithsakka/242/orig 2025-08-14T21:24:06.0479676Z * [new branch] gh/laithsakka/243/base -> origin/gh/laithsakka/243/base 2025-08-14T21:24:06.0479977Z * [new branch] gh/laithsakka/243/head -> origin/gh/laithsakka/243/head 2025-08-14T21:24:06.0480192Z * [new branch] gh/laithsakka/243/orig -> origin/gh/laithsakka/243/orig 2025-08-14T21:24:06.0480530Z * [new branch] gh/laithsakka/244/base -> origin/gh/laithsakka/244/base 2025-08-14T21:24:06.0480801Z * [new branch] gh/laithsakka/244/head -> origin/gh/laithsakka/244/head 2025-08-14T21:24:06.0480976Z * [new branch] gh/laithsakka/244/orig -> origin/gh/laithsakka/244/orig 2025-08-14T21:24:06.0481208Z * [new branch] gh/laithsakka/245/base -> origin/gh/laithsakka/245/base 2025-08-14T21:24:06.0487472Z * [new branch] gh/laithsakka/245/head -> origin/gh/laithsakka/245/head 2025-08-14T21:24:06.0489832Z * [new branch] gh/laithsakka/245/orig -> origin/gh/laithsakka/245/orig 2025-08-14T21:24:06.0490111Z * [new branch] gh/laithsakka/246/base -> origin/gh/laithsakka/246/base 2025-08-14T21:24:06.0495144Z * [new branch] gh/laithsakka/246/head -> origin/gh/laithsakka/246/head 2025-08-14T21:24:06.0495487Z * [new branch] gh/laithsakka/246/orig -> origin/gh/laithsakka/246/orig 2025-08-14T21:24:06.0495720Z * [new branch] gh/laithsakka/247/base -> origin/gh/laithsakka/247/base 2025-08-14T21:24:06.0496072Z * [new branch] gh/laithsakka/247/head -> origin/gh/laithsakka/247/head 2025-08-14T21:24:06.0496236Z * [new branch] gh/laithsakka/247/orig -> origin/gh/laithsakka/247/orig 2025-08-14T21:24:06.0496392Z * [new branch] gh/laithsakka/248/base -> origin/gh/laithsakka/248/base 2025-08-14T21:24:06.0496578Z * [new branch] gh/laithsakka/248/head -> origin/gh/laithsakka/248/head 2025-08-14T21:24:06.0496742Z * [new branch] gh/laithsakka/248/orig -> origin/gh/laithsakka/248/orig 2025-08-14T21:24:06.0496894Z * [new branch] gh/laithsakka/249/base -> origin/gh/laithsakka/249/base 2025-08-14T21:24:06.0497056Z * [new branch] gh/laithsakka/249/head -> origin/gh/laithsakka/249/head 2025-08-14T21:24:06.0497208Z * [new branch] gh/laithsakka/249/orig -> origin/gh/laithsakka/249/orig 2025-08-14T21:24:06.0497357Z * [new branch] gh/laithsakka/250/base -> origin/gh/laithsakka/250/base 2025-08-14T21:24:06.0497512Z * [new branch] gh/laithsakka/250/head -> origin/gh/laithsakka/250/head 2025-08-14T21:24:06.0497658Z * [new branch] gh/laithsakka/250/orig -> origin/gh/laithsakka/250/orig 2025-08-14T21:24:06.0497811Z * [new branch] gh/laithsakka/251/base -> origin/gh/laithsakka/251/base 2025-08-14T21:24:06.0497971Z * [new branch] gh/laithsakka/251/head -> origin/gh/laithsakka/251/head 2025-08-14T21:24:06.0498118Z * [new branch] gh/laithsakka/251/orig -> origin/gh/laithsakka/251/orig 2025-08-14T21:24:06.0498270Z * [new branch] gh/laithsakka/252/base -> origin/gh/laithsakka/252/base 2025-08-14T21:24:06.0498424Z * [new branch] gh/laithsakka/252/head -> origin/gh/laithsakka/252/head 2025-08-14T21:24:06.0498570Z * [new branch] 
gh/laithsakka/252/orig -> origin/gh/laithsakka/252/orig 2025-08-14T21:24:06.0498724Z * [new branch] gh/laithsakka/253/base -> origin/gh/laithsakka/253/base 2025-08-14T21:24:06.0499025Z * [new branch] gh/laithsakka/253/head -> origin/gh/laithsakka/253/head 2025-08-14T21:24:06.0499197Z * [new branch] gh/laithsakka/253/orig -> origin/gh/laithsakka/253/orig 2025-08-14T21:24:06.0502009Z * [new branch] gh/laithsakka/254/base -> origin/gh/laithsakka/254/base 2025-08-14T21:24:06.0502309Z * [new branch] gh/laithsakka/254/head -> origin/gh/laithsakka/254/head 2025-08-14T21:24:06.0507520Z * [new branch] gh/laithsakka/254/orig -> origin/gh/laithsakka/254/orig 2025-08-14T21:24:06.0512423Z * [new branch] gh/laithsakka/255/base -> origin/gh/laithsakka/255/base 2025-08-14T21:24:06.0517445Z * [new branch] gh/laithsakka/255/head -> origin/gh/laithsakka/255/head 2025-08-14T21:24:06.0519880Z * [new branch] gh/laithsakka/255/orig -> origin/gh/laithsakka/255/orig 2025-08-14T21:24:06.0520139Z * [new branch] gh/laithsakka/256/base -> origin/gh/laithsakka/256/base 2025-08-14T21:24:06.0520318Z * [new branch] gh/laithsakka/256/head -> origin/gh/laithsakka/256/head 2025-08-14T21:24:06.0520643Z * [new branch] gh/laithsakka/256/orig -> origin/gh/laithsakka/256/orig 2025-08-14T21:24:06.0520803Z * [new branch] gh/laithsakka/257/base -> origin/gh/laithsakka/257/base 2025-08-14T21:24:06.0520953Z * [new branch] gh/laithsakka/257/head -> origin/gh/laithsakka/257/head 2025-08-14T21:24:06.0521192Z * [new branch] gh/laithsakka/257/orig -> origin/gh/laithsakka/257/orig 2025-08-14T21:24:06.0524640Z * [new branch] gh/laithsakka/258/base -> origin/gh/laithsakka/258/base 2025-08-14T21:24:06.0524842Z * [new branch] gh/laithsakka/258/head -> origin/gh/laithsakka/258/head 2025-08-14T21:24:06.0525008Z * [new branch] gh/laithsakka/258/orig -> origin/gh/laithsakka/258/orig 2025-08-14T21:24:06.0525175Z * [new branch] gh/laithsakka/259/base -> origin/gh/laithsakka/259/base 2025-08-14T21:24:06.0525328Z * [new branch] gh/laithsakka/259/head -> origin/gh/laithsakka/259/head 2025-08-14T21:24:06.0525533Z * [new branch] gh/laithsakka/259/orig -> origin/gh/laithsakka/259/orig 2025-08-14T21:24:06.0525695Z * [new branch] gh/laithsakka/260/base -> origin/gh/laithsakka/260/base 2025-08-14T21:24:06.0525854Z * [new branch] gh/laithsakka/260/head -> origin/gh/laithsakka/260/head 2025-08-14T21:24:06.0526014Z * [new branch] gh/laithsakka/260/orig -> origin/gh/laithsakka/260/orig 2025-08-14T21:24:06.0526169Z * [new branch] gh/laithsakka/261/base -> origin/gh/laithsakka/261/base 2025-08-14T21:24:06.0526326Z * [new branch] gh/laithsakka/261/head -> origin/gh/laithsakka/261/head 2025-08-14T21:24:06.0526491Z * [new branch] gh/laithsakka/261/orig -> origin/gh/laithsakka/261/orig 2025-08-14T21:24:06.0526642Z * [new branch] gh/laithsakka/262/base -> origin/gh/laithsakka/262/base 2025-08-14T21:24:06.0526805Z * [new branch] gh/laithsakka/262/head -> origin/gh/laithsakka/262/head 2025-08-14T21:24:06.0526968Z * [new branch] gh/laithsakka/262/orig -> origin/gh/laithsakka/262/orig 2025-08-14T21:24:06.0527124Z * [new branch] gh/laithsakka/28/base -> origin/gh/laithsakka/28/base 2025-08-14T21:24:06.0527286Z * [new branch] gh/laithsakka/29/base -> origin/gh/laithsakka/29/base 2025-08-14T21:24:06.0527437Z * [new branch] gh/laithsakka/30/base -> origin/gh/laithsakka/30/base 2025-08-14T21:24:06.0527594Z * [new branch] gh/laithsakka/30/head -> origin/gh/laithsakka/30/head 2025-08-14T21:24:06.0527746Z * [new branch] gh/laithsakka/31/base -> origin/gh/laithsakka/31/base 
2025-08-14T21:24:06.0527968Z * [new branch] gh/laithsakka/31/head -> origin/gh/laithsakka/31/head 2025-08-14T21:24:06.0528143Z * [new branch] gh/laithsakka/32/base -> origin/gh/laithsakka/32/base 2025-08-14T21:24:06.0528389Z * [new branch] gh/laithsakka/32/head -> origin/gh/laithsakka/32/head 2025-08-14T21:24:06.0533337Z * [new branch] gh/lucaskabela/1/base -> origin/gh/lucaskabela/1/base 2025-08-14T21:24:06.0533540Z * [new branch] gh/lucaskabela/1/head -> origin/gh/lucaskabela/1/head 2025-08-14T21:24:06.0533710Z * [new branch] gh/lucaskabela/10/base -> origin/gh/lucaskabela/10/base 2025-08-14T21:24:06.0533884Z * [new branch] gh/lucaskabela/10/head -> origin/gh/lucaskabela/10/head 2025-08-14T21:24:06.0534236Z * [new branch] gh/lucaskabela/10/orig -> origin/gh/lucaskabela/10/orig 2025-08-14T21:24:06.0534642Z * [new branch] gh/lucaskabela/11/base -> origin/gh/lucaskabela/11/base 2025-08-14T21:24:06.0535840Z * [new branch] gh/lucaskabela/11/head -> origin/gh/lucaskabela/11/head 2025-08-14T21:24:06.0536081Z * [new branch] gh/lucaskabela/11/orig -> origin/gh/lucaskabela/11/orig 2025-08-14T21:24:06.0539401Z * [new branch] gh/lucaskabela/12/base -> origin/gh/lucaskabela/12/base 2025-08-14T21:24:06.0539607Z * [new branch] gh/lucaskabela/12/head -> origin/gh/lucaskabela/12/head 2025-08-14T21:24:06.0540060Z * [new branch] gh/lucaskabela/12/orig -> origin/gh/lucaskabela/12/orig 2025-08-14T21:24:06.0540234Z * [new branch] gh/lucaskabela/13/base -> origin/gh/lucaskabela/13/base 2025-08-14T21:24:06.0540681Z * [new branch] gh/lucaskabela/13/head -> origin/gh/lucaskabela/13/head 2025-08-14T21:24:06.0541319Z * [new branch] gh/lucaskabela/13/orig -> origin/gh/lucaskabela/13/orig 2025-08-14T21:24:06.0542369Z * [new branch] gh/lucaskabela/14/base -> origin/gh/lucaskabela/14/base 2025-08-14T21:24:06.0543754Z * [new branch] gh/lucaskabela/14/head -> origin/gh/lucaskabela/14/head 2025-08-14T21:24:06.0543927Z * [new branch] gh/lucaskabela/14/orig -> origin/gh/lucaskabela/14/orig 2025-08-14T21:24:06.0546336Z * [new branch] gh/lucaskabela/15/base -> origin/gh/lucaskabela/15/base 2025-08-14T21:24:06.0546692Z * [new branch] gh/lucaskabela/15/head -> origin/gh/lucaskabela/15/head 2025-08-14T21:24:06.0546902Z * [new branch] gh/lucaskabela/15/orig -> origin/gh/lucaskabela/15/orig 2025-08-14T21:24:06.0547117Z * [new branch] gh/lucaskabela/16/base -> origin/gh/lucaskabela/16/base 2025-08-14T21:24:06.0547598Z * [new branch] gh/lucaskabela/16/head -> origin/gh/lucaskabela/16/head 2025-08-14T21:24:06.0551814Z * [new branch] gh/lucaskabela/16/orig -> origin/gh/lucaskabela/16/orig 2025-08-14T21:24:06.0552159Z * [new branch] gh/lucaskabela/17/base -> origin/gh/lucaskabela/17/base 2025-08-14T21:24:06.0552360Z * [new branch] gh/lucaskabela/17/head -> origin/gh/lucaskabela/17/head 2025-08-14T21:24:06.0552627Z * [new branch] gh/lucaskabela/17/orig -> origin/gh/lucaskabela/17/orig 2025-08-14T21:24:06.0552808Z * [new branch] gh/lucaskabela/2/base -> origin/gh/lucaskabela/2/base 2025-08-14T21:24:06.0553077Z * [new branch] gh/lucaskabela/2/head -> origin/gh/lucaskabela/2/head 2025-08-14T21:24:06.0553753Z * [new branch] gh/lucaskabela/2/orig -> origin/gh/lucaskabela/2/orig 2025-08-14T21:24:06.0554085Z * [new branch] gh/lucaskabela/3/base -> origin/gh/lucaskabela/3/base 2025-08-14T21:24:06.0556677Z * [new branch] gh/lucaskabela/3/head -> origin/gh/lucaskabela/3/head 2025-08-14T21:24:06.0557039Z * [new branch] gh/lucaskabela/3/orig -> origin/gh/lucaskabela/3/orig 2025-08-14T21:24:06.0557430Z * [new branch] gh/lucaskabela/4/base -> 
origin/gh/lucaskabela/4/base 2025-08-14T21:24:06.0557607Z * [new branch] gh/lucaskabela/4/head -> origin/gh/lucaskabela/4/head 2025-08-14T21:24:06.0557991Z * [new branch] gh/lucaskabela/4/orig -> origin/gh/lucaskabela/4/orig 2025-08-14T21:24:06.0558916Z * [new branch] gh/lucaskabela/5/base -> origin/gh/lucaskabela/5/base 2025-08-14T21:24:06.0559145Z * [new branch] gh/lucaskabela/5/head -> origin/gh/lucaskabela/5/head 2025-08-14T21:24:06.0563101Z * [new branch] gh/lucaskabela/5/orig -> origin/gh/lucaskabela/5/orig 2025-08-14T21:24:06.0563445Z * [new branch] gh/lucaskabela/6/base -> origin/gh/lucaskabela/6/base 2025-08-14T21:24:06.0563622Z * [new branch] gh/lucaskabela/6/head -> origin/gh/lucaskabela/6/head 2025-08-14T21:24:06.0563882Z * [new branch] gh/lucaskabela/6/orig -> origin/gh/lucaskabela/6/orig 2025-08-14T21:24:06.0564055Z * [new branch] gh/lucaskabela/7/base -> origin/gh/lucaskabela/7/base 2025-08-14T21:24:06.0564580Z * [new branch] gh/lucaskabela/7/head -> origin/gh/lucaskabela/7/head 2025-08-14T21:24:06.0564749Z * [new branch] gh/lucaskabela/7/orig -> origin/gh/lucaskabela/7/orig 2025-08-14T21:24:06.0565791Z * [new branch] gh/lucaskabela/8/base -> origin/gh/lucaskabela/8/base 2025-08-14T21:24:06.0566453Z * [new branch] gh/lucaskabela/8/head -> origin/gh/lucaskabela/8/head 2025-08-14T21:24:06.0567109Z * [new branch] gh/lucaskabela/8/orig -> origin/gh/lucaskabela/8/orig 2025-08-14T21:24:06.0568267Z * [new branch] gh/lucaskabela/9/base -> origin/gh/lucaskabela/9/base 2025-08-14T21:24:06.0568646Z * [new branch] gh/lucaskabela/9/head -> origin/gh/lucaskabela/9/head 2025-08-14T21:24:06.0569550Z * [new branch] gh/lucaskabela/9/orig -> origin/gh/lucaskabela/9/orig 2025-08-14T21:24:06.0570749Z * [new branch] gh/lw/1/base -> origin/gh/lw/1/base 2025-08-14T21:24:06.0571091Z * [new branch] gh/lw/1/head -> origin/gh/lw/1/head 2025-08-14T21:24:06.0572130Z * [new branch] gh/lw/1/orig -> origin/gh/lw/1/orig 2025-08-14T21:24:06.0573026Z * [new branch] gh/lw/2/base -> origin/gh/lw/2/base 2025-08-14T21:24:06.0573499Z * [new branch] gh/lw/2/head -> origin/gh/lw/2/head 2025-08-14T21:24:06.0574544Z * [new branch] gh/lw/2/orig -> origin/gh/lw/2/orig 2025-08-14T21:24:06.0575168Z * [new branch] gh/lw/3/base -> origin/gh/lw/3/base 2025-08-14T21:24:06.0576569Z * [new branch] gh/lw/3/head -> origin/gh/lw/3/head 2025-08-14T21:24:06.0576925Z * [new branch] gh/lw/3/orig -> origin/gh/lw/3/orig 2025-08-14T21:24:06.0582811Z * [new branch] gh/malfet/14/base -> origin/gh/malfet/14/base 2025-08-14T21:24:06.0587118Z * [new branch] gh/malfet/330/base -> origin/gh/malfet/330/base 2025-08-14T21:24:06.0591424Z * [new branch] gh/malfet/330/head -> origin/gh/malfet/330/head 2025-08-14T21:24:06.0595700Z * [new branch] gh/malfet/330/orig -> origin/gh/malfet/330/orig 2025-08-14T21:24:06.0599273Z * [new branch] gh/malfet/396/base -> origin/gh/malfet/396/base 2025-08-14T21:24:06.0603579Z * [new branch] gh/malfet/396/head -> origin/gh/malfet/396/head 2025-08-14T21:24:06.0607705Z * [new branch] gh/malfet/396/orig -> origin/gh/malfet/396/orig 2025-08-14T21:24:06.0608089Z * [new branch] gh/malfet/397/base -> origin/gh/malfet/397/base 2025-08-14T21:24:06.0608229Z * [new branch] gh/malfet/397/head -> origin/gh/malfet/397/head 2025-08-14T21:24:06.0608358Z * [new branch] gh/malfet/397/orig -> origin/gh/malfet/397/orig 2025-08-14T21:24:06.0608509Z * [new branch] gh/malfet/398/base -> origin/gh/malfet/398/base 2025-08-14T21:24:06.0608809Z * [new branch] gh/malfet/398/head -> origin/gh/malfet/398/head 2025-08-14T21:24:06.0608954Z * [new 
branch] gh/malfet/398/orig -> origin/gh/malfet/398/orig 2025-08-14T21:24:06.0609086Z * [new branch] gh/malfet/399/base -> origin/gh/malfet/399/base 2025-08-14T21:24:06.0609212Z * [new branch] gh/malfet/399/head -> origin/gh/malfet/399/head 2025-08-14T21:24:06.0609346Z * [new branch] gh/malfet/399/orig -> origin/gh/malfet/399/orig 2025-08-14T21:24:06.0609484Z * [new branch] gh/malfet/414/base -> origin/gh/malfet/414/base 2025-08-14T21:24:06.0609618Z * [new branch] gh/malfet/414/head -> origin/gh/malfet/414/head 2025-08-14T21:24:06.0609750Z * [new branch] gh/malfet/414/orig -> origin/gh/malfet/414/orig 2025-08-14T21:24:06.0609952Z * [new branch] gh/malfet/417/base -> origin/gh/malfet/417/base 2025-08-14T21:24:06.0610097Z * [new branch] gh/malfet/417/head -> origin/gh/malfet/417/head 2025-08-14T21:24:06.0610231Z * [new branch] gh/malfet/417/orig -> origin/gh/malfet/417/orig 2025-08-14T21:24:06.0610379Z * [new branch] gh/malfet/418/base -> origin/gh/malfet/418/base 2025-08-14T21:24:06.0610506Z * [new branch] gh/malfet/418/head -> origin/gh/malfet/418/head 2025-08-14T21:24:06.0610635Z * [new branch] gh/malfet/418/orig -> origin/gh/malfet/418/orig 2025-08-14T21:24:06.0610769Z * [new branch] gh/malfet/422/base -> origin/gh/malfet/422/base 2025-08-14T21:24:06.0610897Z * [new branch] gh/malfet/422/head -> origin/gh/malfet/422/head 2025-08-14T21:24:06.0611034Z * [new branch] gh/malfet/422/orig -> origin/gh/malfet/422/orig 2025-08-14T21:24:06.0611178Z * [new branch] gh/malfet/438/base -> origin/gh/malfet/438/base 2025-08-14T21:24:06.0611323Z * [new branch] gh/malfet/438/head -> origin/gh/malfet/438/head 2025-08-14T21:24:06.0611463Z * [new branch] gh/malfet/438/orig -> origin/gh/malfet/438/orig 2025-08-14T21:24:06.0611606Z * [new branch] gh/malfet/439/base -> origin/gh/malfet/439/base 2025-08-14T21:24:06.0611748Z * [new branch] gh/malfet/439/head -> origin/gh/malfet/439/head 2025-08-14T21:24:06.0611901Z * [new branch] gh/malfet/439/orig -> origin/gh/malfet/439/orig 2025-08-14T21:24:06.0612046Z * [new branch] gh/malfet/440/base -> origin/gh/malfet/440/base 2025-08-14T21:24:06.0612199Z * [new branch] gh/malfet/440/head -> origin/gh/malfet/440/head 2025-08-14T21:24:06.0612343Z * [new branch] gh/malfet/440/orig -> origin/gh/malfet/440/orig 2025-08-14T21:24:06.0612480Z * [new branch] gh/malfet/441/base -> origin/gh/malfet/441/base 2025-08-14T21:24:06.0612632Z * [new branch] gh/malfet/441/head -> origin/gh/malfet/441/head 2025-08-14T21:24:06.0612768Z * [new branch] gh/malfet/441/orig -> origin/gh/malfet/441/orig 2025-08-14T21:24:06.0612913Z * [new branch] gh/malfet/442/base -> origin/gh/malfet/442/base 2025-08-14T21:24:06.0613057Z * [new branch] gh/malfet/442/head -> origin/gh/malfet/442/head 2025-08-14T21:24:06.0613198Z * [new branch] gh/malfet/442/orig -> origin/gh/malfet/442/orig 2025-08-14T21:24:06.0613343Z * [new branch] gh/malfet/443/base -> origin/gh/malfet/443/base 2025-08-14T21:24:06.0613484Z * [new branch] gh/malfet/443/head -> origin/gh/malfet/443/head 2025-08-14T21:24:06.0613628Z * [new branch] gh/malfet/443/orig -> origin/gh/malfet/443/orig 2025-08-14T21:24:06.0613814Z * [new branch] gh/malfet/444/base -> origin/gh/malfet/444/base 2025-08-14T21:24:06.0613963Z * [new branch] gh/malfet/444/head -> origin/gh/malfet/444/head 2025-08-14T21:24:06.0614113Z * [new branch] gh/malfet/444/orig -> origin/gh/malfet/444/orig 2025-08-14T21:24:06.0614259Z * [new branch] gh/malfet/445/base -> origin/gh/malfet/445/base 2025-08-14T21:24:06.0614403Z * [new branch] gh/malfet/445/head -> origin/gh/malfet/445/head 
2025-08-14T21:24:06.0614719Z * [new branch] gh/malfet/445/orig -> origin/gh/malfet/445/orig 2025-08-14T21:24:06.0614889Z * [new branch] gh/malfet/446/base -> origin/gh/malfet/446/base 2025-08-14T21:24:06.0615908Z * [new branch] gh/malfet/446/head -> origin/gh/malfet/446/head 2025-08-14T21:24:06.0616068Z * [new branch] gh/malfet/446/orig -> origin/gh/malfet/446/orig 2025-08-14T21:24:06.0618042Z * [new branch] gh/malfet/447/base -> origin/gh/malfet/447/base 2025-08-14T21:24:06.0618391Z * [new branch] gh/malfet/447/head -> origin/gh/malfet/447/head 2025-08-14T21:24:06.0618777Z * [new branch] gh/malfet/448/base -> origin/gh/malfet/448/base 2025-08-14T21:24:06.0620643Z * [new branch] gh/malfet/448/head -> origin/gh/malfet/448/head 2025-08-14T21:24:06.0620959Z * [new branch] gh/malfet/449/base -> origin/gh/malfet/449/base 2025-08-14T21:24:06.0621141Z * [new branch] gh/malfet/449/head -> origin/gh/malfet/449/head 2025-08-14T21:24:06.0623191Z * [new branch] gh/malfet/450/base -> origin/gh/malfet/450/base 2025-08-14T21:24:06.0629872Z * [new branch] gh/malfet/450/head -> origin/gh/malfet/450/head 2025-08-14T21:24:06.0630012Z * [new branch] gh/malfet/451/base -> origin/gh/malfet/451/base 2025-08-14T21:24:06.0630171Z * [new branch] gh/malfet/451/head -> origin/gh/malfet/451/head 2025-08-14T21:24:06.0630302Z * [new branch] gh/malfet/452/base -> origin/gh/malfet/452/base 2025-08-14T21:24:06.0630434Z * [new branch] gh/malfet/452/head -> origin/gh/malfet/452/head 2025-08-14T21:24:06.0630573Z * [new branch] gh/malfet/452/orig -> origin/gh/malfet/452/orig 2025-08-14T21:24:06.0630702Z * [new branch] gh/malfet/453/base -> origin/gh/malfet/453/base 2025-08-14T21:24:06.0630841Z * [new branch] gh/malfet/453/head -> origin/gh/malfet/453/head 2025-08-14T21:24:06.0630971Z * [new branch] gh/malfet/453/orig -> origin/gh/malfet/453/orig 2025-08-14T21:24:06.0637101Z * [new branch] gh/malfet/454/base -> origin/gh/malfet/454/base 2025-08-14T21:24:06.0641539Z * [new branch] gh/malfet/454/head -> origin/gh/malfet/454/head 2025-08-14T21:24:06.0646014Z * [new branch] gh/malfet/454/orig -> origin/gh/malfet/454/orig 2025-08-14T21:24:06.0650234Z * [new branch] gh/malfet/455/base -> origin/gh/malfet/455/base 2025-08-14T21:24:06.0650438Z * [new branch] gh/malfet/455/head -> origin/gh/malfet/455/head 2025-08-14T21:24:06.0650581Z * [new branch] gh/malfet/455/orig -> origin/gh/malfet/455/orig 2025-08-14T21:24:06.0650727Z * [new branch] gh/malfet/456/base -> origin/gh/malfet/456/base 2025-08-14T21:24:06.0650864Z * [new branch] gh/malfet/456/head -> origin/gh/malfet/456/head 2025-08-14T21:24:06.0651001Z * [new branch] gh/malfet/456/orig -> origin/gh/malfet/456/orig 2025-08-14T21:24:06.0651143Z * [new branch] gh/malfet/457/base -> origin/gh/malfet/457/base 2025-08-14T21:24:06.0651276Z * [new branch] gh/malfet/457/head -> origin/gh/malfet/457/head 2025-08-14T21:24:06.0651631Z * [new branch] gh/malfet/457/orig -> origin/gh/malfet/457/orig 2025-08-14T21:24:06.0651787Z * [new branch] gh/malfet/458/base -> origin/gh/malfet/458/base 2025-08-14T21:24:06.0651924Z * [new branch] gh/malfet/458/head -> origin/gh/malfet/458/head 2025-08-14T21:24:06.0652068Z * [new branch] gh/malfet/458/orig -> origin/gh/malfet/458/orig 2025-08-14T21:24:06.0652206Z * [new branch] gh/malfet/459/base -> origin/gh/malfet/459/base 2025-08-14T21:24:06.0652340Z * [new branch] gh/malfet/459/head -> origin/gh/malfet/459/head 2025-08-14T21:24:06.0652484Z * [new branch] gh/malfet/459/orig -> origin/gh/malfet/459/orig 2025-08-14T21:24:06.0652617Z * [new branch] 
gh/malfet/460/base -> origin/gh/malfet/460/base 2025-08-14T21:24:06.0652759Z * [new branch] gh/malfet/460/head -> origin/gh/malfet/460/head 2025-08-14T21:24:06.0652968Z * [new branch] gh/malfet/460/orig -> origin/gh/malfet/460/orig 2025-08-14T21:24:06.0653101Z * [new branch] gh/malfet/461/base -> origin/gh/malfet/461/base 2025-08-14T21:24:06.0653244Z * [new branch] gh/malfet/461/head -> origin/gh/malfet/461/head 2025-08-14T21:24:06.0653373Z * [new branch] gh/malfet/461/orig -> origin/gh/malfet/461/orig 2025-08-14T21:24:06.0653511Z * [new branch] gh/malfet/462/base -> origin/gh/malfet/462/base 2025-08-14T21:24:06.0653643Z * [new branch] gh/malfet/462/head -> origin/gh/malfet/462/head 2025-08-14T21:24:06.0653775Z * [new branch] gh/malfet/462/orig -> origin/gh/malfet/462/orig 2025-08-14T21:24:06.0653915Z * [new branch] gh/malfet/463/base -> origin/gh/malfet/463/base 2025-08-14T21:24:06.0654047Z * [new branch] gh/malfet/463/head -> origin/gh/malfet/463/head 2025-08-14T21:24:06.0654184Z * [new branch] gh/malfet/463/orig -> origin/gh/malfet/463/orig 2025-08-14T21:24:06.0654325Z * [new branch] gh/malfet/464/base -> origin/gh/malfet/464/base 2025-08-14T21:24:06.0654456Z * [new branch] gh/malfet/464/head -> origin/gh/malfet/464/head 2025-08-14T21:24:06.0654603Z * [new branch] gh/malfet/464/orig -> origin/gh/malfet/464/orig 2025-08-14T21:24:06.0654774Z * [new branch] gh/malfet/465/base -> origin/gh/malfet/465/base 2025-08-14T21:24:06.0655422Z * [new branch] gh/malfet/465/head -> origin/gh/malfet/465/head 2025-08-14T21:24:06.0655634Z * [new branch] gh/malfet/465/orig -> origin/gh/malfet/465/orig 2025-08-14T21:24:06.0655798Z * [new branch] gh/malfet/466/base -> origin/gh/malfet/466/base 2025-08-14T21:24:06.0655960Z * [new branch] gh/malfet/466/head -> origin/gh/malfet/466/head 2025-08-14T21:24:06.0656110Z * [new branch] gh/malfet/466/orig -> origin/gh/malfet/466/orig 2025-08-14T21:24:06.0656262Z * [new branch] gh/malfet/467/base -> origin/gh/malfet/467/base 2025-08-14T21:24:06.0656813Z * [new branch] gh/malfet/467/head -> origin/gh/malfet/467/head 2025-08-14T21:24:06.0657638Z * [new branch] gh/malfet/467/orig -> origin/gh/malfet/467/orig 2025-08-14T21:24:06.0658691Z * [new branch] gh/malfet/468/base -> origin/gh/malfet/468/base 2025-08-14T21:24:06.0659021Z * [new branch] gh/malfet/468/head -> origin/gh/malfet/468/head 2025-08-14T21:24:06.0660209Z * [new branch] gh/malfet/468/orig -> origin/gh/malfet/468/orig 2025-08-14T21:24:06.0664600Z * [new branch] gh/malfet/469/base -> origin/gh/malfet/469/base 2025-08-14T21:24:06.0664945Z * [new branch] gh/malfet/469/head -> origin/gh/malfet/469/head 2025-08-14T21:24:06.0665296Z * [new branch] gh/malfet/469/orig -> origin/gh/malfet/469/orig 2025-08-14T21:24:06.0665462Z * [new branch] gh/malfet/470/base -> origin/gh/malfet/470/base 2025-08-14T21:24:06.0665614Z * [new branch] gh/malfet/470/head -> origin/gh/malfet/470/head 2025-08-14T21:24:06.0665896Z * [new branch] gh/malfet/470/orig -> origin/gh/malfet/470/orig 2025-08-14T21:24:06.0666044Z * [new branch] gh/malfet/471/base -> origin/gh/malfet/471/base 2025-08-14T21:24:06.0666648Z * [new branch] gh/malfet/471/head -> origin/gh/malfet/471/head 2025-08-14T21:24:06.0667020Z * [new branch] gh/malfet/471/orig -> origin/gh/malfet/471/orig 2025-08-14T21:24:06.0671335Z * [new branch] gh/malfet/472/base -> origin/gh/malfet/472/base 2025-08-14T21:24:06.0671862Z * [new branch] gh/malfet/472/head -> origin/gh/malfet/472/head 2025-08-14T21:24:06.0672071Z * [new branch] gh/malfet/472/orig -> origin/gh/malfet/472/orig 
2025-08-14T21:24:06.0672295Z * [new branch] gh/malfet/473/base -> origin/gh/malfet/473/base 2025-08-14T21:24:06.0672449Z * [new branch] gh/malfet/473/head -> origin/gh/malfet/473/head 2025-08-14T21:24:06.0672590Z * [new branch] gh/malfet/473/orig -> origin/gh/malfet/473/orig 2025-08-14T21:24:06.0672737Z * [new branch] gh/malfet/474/base -> origin/gh/malfet/474/base 2025-08-14T21:24:06.0672906Z * [new branch] gh/malfet/474/head -> origin/gh/malfet/474/head 2025-08-14T21:24:06.0673387Z * [new branch] gh/malfet/474/orig -> origin/gh/malfet/474/orig 2025-08-14T21:24:06.0676150Z * [new branch] gh/malfet/475/base -> origin/gh/malfet/475/base 2025-08-14T21:24:06.0676508Z * [new branch] gh/malfet/475/head -> origin/gh/malfet/475/head 2025-08-14T21:24:06.0676701Z * [new branch] gh/malfet/475/orig -> origin/gh/malfet/475/orig 2025-08-14T21:24:06.0677032Z * [new branch] gh/malfet/476/base -> origin/gh/malfet/476/base 2025-08-14T21:24:06.0677231Z * [new branch] gh/malfet/476/head -> origin/gh/malfet/476/head 2025-08-14T21:24:06.0678243Z * [new branch] gh/malfet/476/orig -> origin/gh/malfet/476/orig 2025-08-14T21:24:06.0681209Z * [new branch] gh/malfet/477/base -> origin/gh/malfet/477/base 2025-08-14T21:24:06.0681497Z * [new branch] gh/malfet/477/head -> origin/gh/malfet/477/head 2025-08-14T21:24:06.0684701Z * [new branch] gh/malfet/477/orig -> origin/gh/malfet/477/orig 2025-08-14T21:24:06.0685017Z * [new branch] gh/malfet/478/base -> origin/gh/malfet/478/base 2025-08-14T21:24:06.0685213Z * [new branch] gh/malfet/478/head -> origin/gh/malfet/478/head 2025-08-14T21:24:06.0685459Z * [new branch] gh/malfet/478/orig -> origin/gh/malfet/478/orig 2025-08-14T21:24:06.0685627Z * [new branch] gh/malfet/479/base -> origin/gh/malfet/479/base 2025-08-14T21:24:06.0685845Z * [new branch] gh/malfet/479/head -> origin/gh/malfet/479/head 2025-08-14T21:24:06.0686002Z * [new branch] gh/malfet/479/orig -> origin/gh/malfet/479/orig 2025-08-14T21:24:06.0686219Z * [new branch] gh/malfet/480/base -> origin/gh/malfet/480/base 2025-08-14T21:24:06.0686931Z * [new branch] gh/malfet/480/head -> origin/gh/malfet/480/head 2025-08-14T21:24:06.0687280Z * [new branch] gh/malfet/480/orig -> origin/gh/malfet/480/orig 2025-08-14T21:24:06.0688565Z * [new branch] gh/malfet/481/base -> origin/gh/malfet/481/base 2025-08-14T21:24:06.0689098Z * [new branch] gh/malfet/481/head -> origin/gh/malfet/481/head 2025-08-14T21:24:06.0689320Z * [new branch] gh/malfet/481/orig -> origin/gh/malfet/481/orig 2025-08-14T21:24:06.0691587Z * [new branch] gh/malfet/482/base -> origin/gh/malfet/482/base 2025-08-14T21:24:06.0691760Z * [new branch] gh/malfet/482/head -> origin/gh/malfet/482/head 2025-08-14T21:24:06.0691912Z * [new branch] gh/malfet/482/orig -> origin/gh/malfet/482/orig 2025-08-14T21:24:06.0692790Z * [new branch] gh/malfet/483/base -> origin/gh/malfet/483/base 2025-08-14T21:24:06.0693186Z * [new branch] gh/malfet/483/head -> origin/gh/malfet/483/head 2025-08-14T21:24:06.0695350Z * [new branch] gh/malfet/483/orig -> origin/gh/malfet/483/orig 2025-08-14T21:24:06.0695523Z * [new branch] gh/malfet/484/base -> origin/gh/malfet/484/base 2025-08-14T21:24:06.0695901Z * [new branch] gh/malfet/484/head -> origin/gh/malfet/484/head 2025-08-14T21:24:06.0696837Z * [new branch] gh/malfet/484/orig -> origin/gh/malfet/484/orig 2025-08-14T21:24:06.0697942Z * [new branch] gh/malfet/485/base -> origin/gh/malfet/485/base 2025-08-14T21:24:06.0698212Z * [new branch] gh/malfet/485/head -> origin/gh/malfet/485/head 2025-08-14T21:24:06.0699339Z * [new branch] 
gh/malfet/485/orig -> origin/gh/malfet/485/orig 2025-08-14T21:24:06.0700036Z * [new branch] gh/malfet/486/base -> origin/gh/malfet/486/base 2025-08-14T21:24:06.0700563Z * [new branch] gh/malfet/486/head -> origin/gh/malfet/486/head 2025-08-14T21:24:06.0703338Z * [new branch] gh/malfet/486/orig -> origin/gh/malfet/486/orig 2025-08-14T21:24:06.0703639Z * [new branch] gh/malfet/487/base -> origin/gh/malfet/487/base 2025-08-14T21:24:06.0703839Z * [new branch] gh/malfet/487/head -> origin/gh/malfet/487/head 2025-08-14T21:24:06.0704096Z * [new branch] gh/malfet/487/orig -> origin/gh/malfet/487/orig 2025-08-14T21:24:06.0704577Z * [new branch] gh/malfet/488/base -> origin/gh/malfet/488/base 2025-08-14T21:24:06.0705033Z * [new branch] gh/malfet/488/head -> origin/gh/malfet/488/head 2025-08-14T21:24:06.0709450Z * [new branch] gh/malfet/488/orig -> origin/gh/malfet/488/orig 2025-08-14T21:24:06.0711891Z * [new branch] gh/malfet/489/base -> origin/gh/malfet/489/base 2025-08-14T21:24:06.0712220Z * [new branch] gh/malfet/489/head -> origin/gh/malfet/489/head 2025-08-14T21:24:06.0712407Z * [new branch] gh/malfet/489/orig -> origin/gh/malfet/489/orig 2025-08-14T21:24:06.0712546Z * [new branch] gh/malfet/490/base -> origin/gh/malfet/490/base 2025-08-14T21:24:06.0712703Z * [new branch] gh/malfet/490/head -> origin/gh/malfet/490/head 2025-08-14T21:24:06.0712846Z * [new branch] gh/malfet/490/orig -> origin/gh/malfet/490/orig 2025-08-14T21:24:06.0712990Z * [new branch] gh/malfet/64/base -> origin/gh/malfet/64/base 2025-08-14T21:24:06.0713269Z * [new branch] gh/malfet/64/head -> origin/gh/malfet/64/head 2025-08-14T21:24:06.0713979Z * [new branch] gh/manuelcandales/10/base -> origin/gh/manuelcandales/10/base 2025-08-14T21:24:06.0714327Z * [new branch] gh/manuelcandales/10/head -> origin/gh/manuelcandales/10/head 2025-08-14T21:24:06.0714507Z * [new branch] gh/manuelcandales/10/orig -> origin/gh/manuelcandales/10/orig 2025-08-14T21:24:06.0716136Z * [new branch] gh/manuelcandales/9/base -> origin/gh/manuelcandales/9/base 2025-08-14T21:24:06.0716500Z * [new branch] gh/manuelcandales/9/head -> origin/gh/manuelcandales/9/head 2025-08-14T21:24:06.0716836Z * [new branch] gh/manuelcandales/9/orig -> origin/gh/manuelcandales/9/orig 2025-08-14T21:24:06.0718411Z * [new branch] gh/markkm/1/base -> origin/gh/markkm/1/base 2025-08-14T21:24:06.0723232Z * [new branch] gh/masnesral/204/base -> origin/gh/masnesral/204/base 2025-08-14T21:24:06.0723424Z * [new branch] gh/masnesral/204/head -> origin/gh/masnesral/204/head 2025-08-14T21:24:06.0723569Z * [new branch] gh/masnesral/204/orig -> origin/gh/masnesral/204/orig 2025-08-14T21:24:06.0723721Z * [new branch] gh/masnesral/223/base -> origin/gh/masnesral/223/base 2025-08-14T21:24:06.0723867Z * [new branch] gh/masnesral/223/head -> origin/gh/masnesral/223/head 2025-08-14T21:24:06.0724015Z * [new branch] gh/masnesral/223/orig -> origin/gh/masnesral/223/orig 2025-08-14T21:24:06.0725531Z * [new branch] gh/masnesral/224/base -> origin/gh/masnesral/224/base 2025-08-14T21:24:06.0729594Z * [new branch] gh/masnesral/224/head -> origin/gh/masnesral/224/head 2025-08-14T21:24:06.0729870Z * [new branch] gh/masnesral/224/orig -> origin/gh/masnesral/224/orig 2025-08-14T21:24:06.0735603Z * [new branch] gh/masnesral/225/base -> origin/gh/masnesral/225/base 2025-08-14T21:24:06.0735939Z * [new branch] gh/masnesral/225/head -> origin/gh/masnesral/225/head 2025-08-14T21:24:06.0736142Z * [new branch] gh/masnesral/225/orig -> origin/gh/masnesral/225/orig 2025-08-14T21:24:06.0736326Z * [new branch] 
gh/masnesral/226/base -> origin/gh/masnesral/226/base 2025-08-14T21:24:06.0736471Z * [new branch] gh/masnesral/226/head -> origin/gh/masnesral/226/head 2025-08-14T21:24:06.0736626Z * [new branch] gh/masnesral/226/orig -> origin/gh/masnesral/226/orig 2025-08-14T21:24:06.0736792Z * [new branch] gh/masnesral/227/base -> origin/gh/masnesral/227/base 2025-08-14T21:24:06.0736933Z * [new branch] gh/masnesral/227/head -> origin/gh/masnesral/227/head 2025-08-14T21:24:06.0737085Z * [new branch] gh/masnesral/227/orig -> origin/gh/masnesral/227/orig 2025-08-14T21:24:06.0737230Z * [new branch] gh/masnesral/228/base -> origin/gh/masnesral/228/base 2025-08-14T21:24:06.0737391Z * [new branch] gh/masnesral/228/head -> origin/gh/masnesral/228/head 2025-08-14T21:24:06.0737535Z * [new branch] gh/masnesral/228/orig -> origin/gh/masnesral/228/orig 2025-08-14T21:24:06.0737680Z * [new branch] gh/masnesral/229/base -> origin/gh/masnesral/229/base 2025-08-14T21:24:06.0737834Z * [new branch] gh/masnesral/229/head -> origin/gh/masnesral/229/head 2025-08-14T21:24:06.0737986Z * [new branch] gh/masnesral/229/orig -> origin/gh/masnesral/229/orig 2025-08-14T21:24:06.0738384Z * [new branch] gh/masnesral/230/base -> origin/gh/masnesral/230/base 2025-08-14T21:24:06.0739252Z * [new branch] gh/masnesral/230/head -> origin/gh/masnesral/230/head 2025-08-14T21:24:06.0740729Z * [new branch] gh/masnesral/230/orig -> origin/gh/masnesral/230/orig 2025-08-14T21:24:06.0741000Z * [new branch] gh/masnesral/231/base -> origin/gh/masnesral/231/base 2025-08-14T21:24:06.0743899Z * [new branch] gh/masnesral/231/head -> origin/gh/masnesral/231/head 2025-08-14T21:24:06.0744232Z * [new branch] gh/masnesral/231/orig -> origin/gh/masnesral/231/orig 2025-08-14T21:24:06.0744473Z * [new branch] gh/masnesral/232/base -> origin/gh/masnesral/232/base 2025-08-14T21:24:06.0748678Z * [new branch] gh/masnesral/232/head -> origin/gh/masnesral/232/head 2025-08-14T21:24:06.0749233Z * [new branch] gh/masnesral/232/orig -> origin/gh/masnesral/232/orig 2025-08-14T21:24:06.0749518Z * [new branch] gh/masnesral/233/base -> origin/gh/masnesral/233/base 2025-08-14T21:24:06.0749681Z * [new branch] gh/masnesral/233/head -> origin/gh/masnesral/233/head 2025-08-14T21:24:06.0749820Z * [new branch] gh/masnesral/233/orig -> origin/gh/masnesral/233/orig 2025-08-14T21:24:06.0754964Z * [new branch] gh/masnesral/234/base -> origin/gh/masnesral/234/base 2025-08-14T21:24:06.0755298Z * [new branch] gh/masnesral/234/head -> origin/gh/masnesral/234/head 2025-08-14T21:24:06.0755542Z * [new branch] gh/masnesral/234/orig -> origin/gh/masnesral/234/orig 2025-08-14T21:24:06.0755726Z * [new branch] gh/masnesral/235/base -> origin/gh/masnesral/235/base 2025-08-14T21:24:06.0755917Z * [new branch] gh/masnesral/235/head -> origin/gh/masnesral/235/head 2025-08-14T21:24:06.0756405Z * [new branch] gh/masnesral/235/orig -> origin/gh/masnesral/235/orig 2025-08-14T21:24:06.0756588Z * [new branch] gh/masnesral/236/base -> origin/gh/masnesral/236/base 2025-08-14T21:24:06.0756735Z * [new branch] gh/masnesral/236/head -> origin/gh/masnesral/236/head 2025-08-14T21:24:06.0756884Z * [new branch] gh/masnesral/236/orig -> origin/gh/masnesral/236/orig 2025-08-14T21:24:06.0757043Z * [new branch] gh/masnesral/34/base -> origin/gh/masnesral/34/base 2025-08-14T21:24:06.0757204Z * [new branch] gh/mhorowitz/0/base -> origin/gh/mhorowitz/0/base 2025-08-14T21:24:06.0762723Z * [new branch] gh/mhorowitz/0/head -> origin/gh/mhorowitz/0/head 2025-08-14T21:24:06.0763067Z * [new branch] gh/mhorowitz/1/base -> 
origin/gh/mhorowitz/1/base 2025-08-14T21:24:06.0767647Z * [new branch] gh/mhorowitz/1/head -> origin/gh/mhorowitz/1/head 2025-08-14T21:24:06.0770013Z * [new branch] gh/mhorowitz/2/base -> origin/gh/mhorowitz/2/base 2025-08-14T21:24:06.0770296Z * [new branch] gh/mhorowitz/2/head -> origin/gh/mhorowitz/2/head 2025-08-14T21:24:06.0774810Z * [new branch] gh/mhorowitz/3/base -> origin/gh/mhorowitz/3/base 2025-08-14T21:24:06.0775101Z * [new branch] gh/mhorowitz/3/head -> origin/gh/mhorowitz/3/head 2025-08-14T21:24:06.0780750Z * [new branch] gh/mhorowitz/4/base -> origin/gh/mhorowitz/4/base 2025-08-14T21:24:06.0785277Z * [new branch] gh/mhorowitz/4/head -> origin/gh/mhorowitz/4/head 2025-08-14T21:24:06.0785631Z * [new branch] gh/mhorowitz/5/base -> origin/gh/mhorowitz/5/base 2025-08-14T21:24:06.0785829Z * [new branch] gh/mhorowitz/5/head -> origin/gh/mhorowitz/5/head 2025-08-14T21:24:06.0786028Z * [new branch] gh/mhorowitz/6/base -> origin/gh/mhorowitz/6/base 2025-08-14T21:24:06.0786258Z * [new branch] gh/mhorowitz/6/head -> origin/gh/mhorowitz/6/head 2025-08-14T21:24:06.0786498Z * [new branch] gh/mikaylagawarecki/234/base -> origin/gh/mikaylagawarecki/234/base 2025-08-14T21:24:06.0787198Z * [new branch] gh/mikaylagawarecki/234/head -> origin/gh/mikaylagawarecki/234/head 2025-08-14T21:24:06.0787424Z * [new branch] gh/mikaylagawarecki/235/base -> origin/gh/mikaylagawarecki/235/base 2025-08-14T21:24:06.0787611Z * [new branch] gh/mikaylagawarecki/235/head -> origin/gh/mikaylagawarecki/235/head 2025-08-14T21:24:06.0787783Z * [new branch] gh/mikaylagawarecki/236/base -> origin/gh/mikaylagawarecki/236/base 2025-08-14T21:24:06.0787955Z * [new branch] gh/mikaylagawarecki/236/head -> origin/gh/mikaylagawarecki/236/head 2025-08-14T21:24:06.0788133Z * [new branch] gh/mikaylagawarecki/237/base -> origin/gh/mikaylagawarecki/237/base 2025-08-14T21:24:06.0788489Z * [new branch] gh/mikaylagawarecki/237/head -> origin/gh/mikaylagawarecki/237/head 2025-08-14T21:24:06.0788671Z * [new branch] gh/mikaylagawarecki/238/base -> origin/gh/mikaylagawarecki/238/base 2025-08-14T21:24:06.0788844Z * [new branch] gh/mikaylagawarecki/238/head -> origin/gh/mikaylagawarecki/238/head 2025-08-14T21:24:06.0789009Z * [new branch] gh/mikaylagawarecki/313/base -> origin/gh/mikaylagawarecki/313/base 2025-08-14T21:24:06.0789185Z * [new branch] gh/mikaylagawarecki/313/head -> origin/gh/mikaylagawarecki/313/head 2025-08-14T21:24:06.0789352Z * [new branch] gh/mikaylagawarecki/313/orig -> origin/gh/mikaylagawarecki/313/orig 2025-08-14T21:24:06.0789528Z * [new branch] gh/mikaylagawarecki/317/base -> origin/gh/mikaylagawarecki/317/base 2025-08-14T21:24:06.0789697Z * [new branch] gh/mikaylagawarecki/317/head -> origin/gh/mikaylagawarecki/317/head 2025-08-14T21:24:06.0789919Z * [new branch] gh/mikaylagawarecki/317/orig -> origin/gh/mikaylagawarecki/317/orig 2025-08-14T21:24:06.0790092Z * [new branch] gh/mikaylagawarecki/318/base -> origin/gh/mikaylagawarecki/318/base 2025-08-14T21:24:06.0790256Z * [new branch] gh/mikaylagawarecki/318/head -> origin/gh/mikaylagawarecki/318/head 2025-08-14T21:24:06.0790429Z * [new branch] gh/mikaylagawarecki/318/orig -> origin/gh/mikaylagawarecki/318/orig 2025-08-14T21:24:06.0790595Z * [new branch] gh/mikaylagawarecki/319/base -> origin/gh/mikaylagawarecki/319/base 2025-08-14T21:24:06.0790762Z * [new branch] gh/mikaylagawarecki/319/head -> origin/gh/mikaylagawarecki/319/head 2025-08-14T21:24:06.0790935Z * [new branch] gh/mikaylagawarecki/319/orig -> origin/gh/mikaylagawarecki/319/orig 2025-08-14T21:24:06.0791101Z 
* [new branch] gh/mikaylagawarecki/320/base -> origin/gh/mikaylagawarecki/320/base 2025-08-14T21:24:06.0791280Z * [new branch] gh/mikaylagawarecki/320/head -> origin/gh/mikaylagawarecki/320/head 2025-08-14T21:24:06.0791447Z * [new branch] gh/mikaylagawarecki/320/orig -> origin/gh/mikaylagawarecki/320/orig 2025-08-14T21:24:06.0791613Z * [new branch] gh/mikaylagawarecki/321/base -> origin/gh/mikaylagawarecki/321/base 2025-08-14T21:24:06.0791786Z * [new branch] gh/mikaylagawarecki/321/head -> origin/gh/mikaylagawarecki/321/head 2025-08-14T21:24:06.0791953Z * [new branch] gh/mikaylagawarecki/321/orig -> origin/gh/mikaylagawarecki/321/orig 2025-08-14T21:24:06.0792127Z * [new branch] gh/mikaylagawarecki/322/base -> origin/gh/mikaylagawarecki/322/base 2025-08-14T21:24:06.0792300Z * [new branch] gh/mikaylagawarecki/322/head -> origin/gh/mikaylagawarecki/322/head 2025-08-14T21:24:06.0792466Z * [new branch] gh/mikaylagawarecki/322/orig -> origin/gh/mikaylagawarecki/322/orig 2025-08-14T21:24:06.0792650Z * [new branch] gh/mikaylagawarecki/323/base -> origin/gh/mikaylagawarecki/323/base 2025-08-14T21:24:06.0792820Z * [new branch] gh/mikaylagawarecki/323/head -> origin/gh/mikaylagawarecki/323/head 2025-08-14T21:24:06.0792995Z * [new branch] gh/mikaylagawarecki/323/orig -> origin/gh/mikaylagawarecki/323/orig 2025-08-14T21:24:06.0793159Z * [new branch] gh/mikaylagawarecki/324/base -> origin/gh/mikaylagawarecki/324/base 2025-08-14T21:24:06.0793330Z * [new branch] gh/mikaylagawarecki/324/head -> origin/gh/mikaylagawarecki/324/head 2025-08-14T21:24:06.0793507Z * [new branch] gh/mikaylagawarecki/324/orig -> origin/gh/mikaylagawarecki/324/orig 2025-08-14T21:24:06.0793893Z * [new branch] gh/mikaylagawarecki/325/base -> origin/gh/mikaylagawarecki/325/base 2025-08-14T21:24:06.0795141Z * [new branch] gh/mikaylagawarecki/325/head -> origin/gh/mikaylagawarecki/325/head 2025-08-14T21:24:06.0795506Z * [new branch] gh/mikaylagawarecki/325/orig -> origin/gh/mikaylagawarecki/325/orig 2025-08-14T21:24:06.0796058Z * [new branch] gh/mikaylagawarecki/326/base -> origin/gh/mikaylagawarecki/326/base 2025-08-14T21:24:06.0797300Z * [new branch] gh/mikaylagawarecki/326/head -> origin/gh/mikaylagawarecki/326/head 2025-08-14T21:24:06.0797616Z * [new branch] gh/mikaylagawarecki/326/orig -> origin/gh/mikaylagawarecki/326/orig 2025-08-14T21:24:06.0799961Z * [new branch] gh/mikaylagawarecki/327/base -> origin/gh/mikaylagawarecki/327/base 2025-08-14T21:24:06.0800330Z * [new branch] gh/mikaylagawarecki/327/head -> origin/gh/mikaylagawarecki/327/head 2025-08-14T21:24:06.0800562Z * [new branch] gh/mikaylagawarecki/327/orig -> origin/gh/mikaylagawarecki/327/orig 2025-08-14T21:24:06.0802182Z * [new branch] gh/mikaylagawarecki/328/base -> origin/gh/mikaylagawarecki/328/base 2025-08-14T21:24:06.0802728Z * [new branch] gh/mikaylagawarecki/328/head -> origin/gh/mikaylagawarecki/328/head 2025-08-14T21:24:06.0802998Z * [new branch] gh/mikaylagawarecki/328/orig -> origin/gh/mikaylagawarecki/328/orig 2025-08-14T21:24:06.0805897Z * [new branch] gh/mikaylagawarecki/329/base -> origin/gh/mikaylagawarecki/329/base 2025-08-14T21:24:06.0810431Z * [new branch] gh/mikaylagawarecki/329/head -> origin/gh/mikaylagawarecki/329/head 2025-08-14T21:24:06.0814938Z * [new branch] gh/mikaylagawarecki/329/orig -> origin/gh/mikaylagawarecki/329/orig 2025-08-14T21:24:06.0815340Z * [new branch] gh/mikaylagawarecki/330/base -> origin/gh/mikaylagawarecki/330/base 2025-08-14T21:24:06.0815541Z * [new branch] gh/mikaylagawarecki/330/head -> origin/gh/mikaylagawarecki/330/head 
2025-08-14T21:24:06.0815732Z * [new branch] gh/mikaylagawarecki/330/orig -> origin/gh/mikaylagawarecki/330/orig 2025-08-14T21:24:06.0815936Z * [new branch] gh/mikaylagawarecki/331/base -> origin/gh/mikaylagawarecki/331/base 2025-08-14T21:24:06.0816122Z * [new branch] gh/mikaylagawarecki/331/head -> origin/gh/mikaylagawarecki/331/head 2025-08-14T21:24:06.0816306Z * [new branch] gh/mikaylagawarecki/331/orig -> origin/gh/mikaylagawarecki/331/orig 2025-08-14T21:24:06.0816480Z * [new branch] gh/mikaylagawarecki/332/base -> origin/gh/mikaylagawarecki/332/base 2025-08-14T21:24:06.0816652Z * [new branch] gh/mikaylagawarecki/332/head -> origin/gh/mikaylagawarecki/332/head 2025-08-14T21:24:06.0816830Z * [new branch] gh/mikaylagawarecki/332/orig -> origin/gh/mikaylagawarecki/332/orig 2025-08-14T21:24:06.0817000Z * [new branch] gh/mikaylagawarecki/333/base -> origin/gh/mikaylagawarecki/333/base 2025-08-14T21:24:06.0817179Z * [new branch] gh/mikaylagawarecki/333/head -> origin/gh/mikaylagawarecki/333/head 2025-08-14T21:24:06.0817353Z * [new branch] gh/mikaylagawarecki/333/orig -> origin/gh/mikaylagawarecki/333/orig 2025-08-14T21:24:06.0818706Z * [new branch] gh/mikaylagawarecki/334/base -> origin/gh/mikaylagawarecki/334/base 2025-08-14T21:24:06.0819169Z * [new branch] gh/mikaylagawarecki/334/head -> origin/gh/mikaylagawarecki/334/head 2025-08-14T21:24:06.0819374Z * [new branch] gh/mikaylagawarecki/334/orig -> origin/gh/mikaylagawarecki/334/orig 2025-08-14T21:24:06.0819547Z * [new branch] gh/mlazos/1/base -> origin/gh/mlazos/1/base 2025-08-14T21:24:06.0819793Z * [new branch] gh/mlazos/1/head -> origin/gh/mlazos/1/head 2025-08-14T21:24:06.0819983Z * [new branch] gh/mlazos/1/orig -> origin/gh/mlazos/1/orig 2025-08-14T21:24:06.0828066Z * [new branch] gh/mlazos/10/base -> origin/gh/mlazos/10/base 2025-08-14T21:24:06.0828422Z * [new branch] gh/mlazos/10/head -> origin/gh/mlazos/10/head 2025-08-14T21:24:06.0828764Z * [new branch] gh/mlazos/10/orig -> origin/gh/mlazos/10/orig 2025-08-14T21:24:06.0828916Z * [new branch] gh/mlazos/11/base -> origin/gh/mlazos/11/base 2025-08-14T21:24:06.0829081Z * [new branch] gh/mlazos/11/head -> origin/gh/mlazos/11/head 2025-08-14T21:24:06.0829231Z * [new branch] gh/mlazos/11/orig -> origin/gh/mlazos/11/orig 2025-08-14T21:24:06.0829377Z * [new branch] gh/mlazos/12/base -> origin/gh/mlazos/12/base 2025-08-14T21:24:06.0829512Z * [new branch] gh/mlazos/12/head -> origin/gh/mlazos/12/head 2025-08-14T21:24:06.0829656Z * [new branch] gh/mlazos/12/orig -> origin/gh/mlazos/12/orig 2025-08-14T21:24:06.0829788Z * [new branch] gh/mlazos/13/base -> origin/gh/mlazos/13/base 2025-08-14T21:24:06.0829930Z * [new branch] gh/mlazos/13/head -> origin/gh/mlazos/13/head 2025-08-14T21:24:06.0830146Z * [new branch] gh/mlazos/13/orig -> origin/gh/mlazos/13/orig 2025-08-14T21:24:06.0836844Z * [new branch] gh/mlazos/2/base -> origin/gh/mlazos/2/base 2025-08-14T21:24:06.0839691Z * [new branch] gh/mlazos/2/head -> origin/gh/mlazos/2/head 2025-08-14T21:24:06.0839976Z * [new branch] gh/mlazos/2/orig -> origin/gh/mlazos/2/orig 2025-08-14T21:24:06.0843404Z * [new branch] gh/mlazos/3/base -> origin/gh/mlazos/3/base 2025-08-14T21:24:06.0843702Z * [new branch] gh/mlazos/3/head -> origin/gh/mlazos/3/head 2025-08-14T21:24:06.0843968Z * [new branch] gh/mlazos/3/orig -> origin/gh/mlazos/3/orig 2025-08-14T21:24:06.0844105Z * [new branch] gh/mlazos/4/base -> origin/gh/mlazos/4/base 2025-08-14T21:24:06.0844251Z * [new branch] gh/mlazos/4/head -> origin/gh/mlazos/4/head 2025-08-14T21:24:06.0844408Z * [new branch] 
gh/mlazos/4/orig -> origin/gh/mlazos/4/orig 2025-08-14T21:24:06.0844555Z * [new branch] gh/mlazos/5/base -> origin/gh/mlazos/5/base 2025-08-14T21:24:06.0844774Z * [new branch] gh/mlazos/5/head -> origin/gh/mlazos/5/head 2025-08-14T21:24:06.0847456Z * [new branch] gh/mlazos/5/orig -> origin/gh/mlazos/5/orig 2025-08-14T21:24:06.0847633Z * [new branch] gh/mlazos/6/base -> origin/gh/mlazos/6/base 2025-08-14T21:24:06.0847780Z * [new branch] gh/mlazos/6/head -> origin/gh/mlazos/6/head 2025-08-14T21:24:06.0848049Z * [new branch] gh/mlazos/6/orig -> origin/gh/mlazos/6/orig 2025-08-14T21:24:06.0848197Z * [new branch] gh/mlazos/7/base -> origin/gh/mlazos/7/base 2025-08-14T21:24:06.0848418Z * [new branch] gh/mlazos/7/head -> origin/gh/mlazos/7/head 2025-08-14T21:24:06.0852461Z * [new branch] gh/mlazos/7/orig -> origin/gh/mlazos/7/orig 2025-08-14T21:24:06.0853097Z * [new branch] gh/mlazos/8/base -> origin/gh/mlazos/8/base 2025-08-14T21:24:06.0853289Z * [new branch] gh/mlazos/8/head -> origin/gh/mlazos/8/head 2025-08-14T21:24:06.0853432Z * [new branch] gh/mlazos/8/orig -> origin/gh/mlazos/8/orig 2025-08-14T21:24:06.0853785Z * [new branch] gh/mlazos/9/base -> origin/gh/mlazos/9/base 2025-08-14T21:24:06.0853923Z * [new branch] gh/mlazos/9/head -> origin/gh/mlazos/9/head 2025-08-14T21:24:06.0854206Z * [new branch] gh/mlazos/9/orig -> origin/gh/mlazos/9/orig 2025-08-14T21:24:06.0854385Z * [new branch] gh/mrmiywj/1/base -> origin/gh/mrmiywj/1/base 2025-08-14T21:24:06.0854528Z * [new branch] gh/mrmiywj/1/head -> origin/gh/mrmiywj/1/head 2025-08-14T21:24:06.0858076Z * [new branch] gh/muchulee8/62/base -> origin/gh/muchulee8/62/base 2025-08-14T21:24:06.0858521Z * [new branch] gh/muchulee8/62/head -> origin/gh/muchulee8/62/head 2025-08-14T21:24:06.0858690Z * [new branch] gh/muchulee8/62/orig -> origin/gh/muchulee8/62/orig 2025-08-14T21:24:06.0858857Z * [new branch] gh/muchulee8/63/base -> origin/gh/muchulee8/63/base 2025-08-14T21:24:06.0859048Z * [new branch] gh/muchulee8/63/head -> origin/gh/muchulee8/63/head 2025-08-14T21:24:06.0859907Z * [new branch] gh/muchulee8/63/orig -> origin/gh/muchulee8/63/orig 2025-08-14T21:24:06.0867500Z * [new branch] gh/muchulee8/64/base -> origin/gh/muchulee8/64/base 2025-08-14T21:24:06.0872387Z * [new branch] gh/muchulee8/64/head -> origin/gh/muchulee8/64/head 2025-08-14T21:24:06.0876689Z * [new branch] gh/muchulee8/64/orig -> origin/gh/muchulee8/64/orig 2025-08-14T21:24:06.0881111Z * [new branch] gh/muchulee8/65/base -> origin/gh/muchulee8/65/base 2025-08-14T21:24:06.0885001Z * [new branch] gh/muchulee8/65/head -> origin/gh/muchulee8/65/head 2025-08-14T21:24:06.0888572Z * [new branch] gh/muchulee8/65/orig -> origin/gh/muchulee8/65/orig 2025-08-14T21:24:06.0892764Z * [new branch] gh/oulgen/35/base -> origin/gh/oulgen/35/base 2025-08-14T21:24:06.0894943Z * [new branch] gh/oulgen/35/head -> origin/gh/oulgen/35/head 2025-08-14T21:24:06.0895121Z * [new branch] gh/oulgen/35/orig -> origin/gh/oulgen/35/orig 2025-08-14T21:24:06.0895258Z * [new branch] gh/oulgen/44/base -> origin/gh/oulgen/44/base 2025-08-14T21:24:06.0895393Z * [new branch] gh/oulgen/44/head -> origin/gh/oulgen/44/head 2025-08-14T21:24:06.0895538Z * [new branch] gh/oulgen/44/orig -> origin/gh/oulgen/44/orig 2025-08-14T21:24:06.0895696Z * [new branch] gh/oulgen/45/base -> origin/gh/oulgen/45/base 2025-08-14T21:24:06.0895829Z * [new branch] gh/oulgen/45/head -> origin/gh/oulgen/45/head 2025-08-14T21:24:06.0895972Z * [new branch] gh/oulgen/45/orig -> origin/gh/oulgen/45/orig 2025-08-14T21:24:06.0896104Z * [new branch] 
gh/oulgen/46/base -> origin/gh/oulgen/46/base 2025-08-14T21:24:06.0896244Z * [new branch] gh/oulgen/46/head -> origin/gh/oulgen/46/head 2025-08-14T21:24:06.0896377Z * [new branch] gh/oulgen/46/orig -> origin/gh/oulgen/46/orig 2025-08-14T21:24:06.0896509Z * [new branch] gh/oulgen/47/base -> origin/gh/oulgen/47/base 2025-08-14T21:24:06.0896651Z * [new branch] gh/oulgen/47/head -> origin/gh/oulgen/47/head 2025-08-14T21:24:06.0896783Z * [new branch] gh/oulgen/47/orig -> origin/gh/oulgen/47/orig 2025-08-14T21:24:06.0896941Z * [new branch] gh/pearu/108/base -> origin/gh/pearu/108/base 2025-08-14T21:24:06.0897079Z * [new branch] gh/pearu/108/head -> origin/gh/pearu/108/head 2025-08-14T21:24:06.0897212Z * [new branch] gh/pearu/108/orig -> origin/gh/pearu/108/orig 2025-08-14T21:24:06.0897361Z * [new branch] gh/pearu/56/base -> origin/gh/pearu/56/base 2025-08-14T21:24:06.0897498Z * [new branch] gh/pearu/56/head -> origin/gh/pearu/56/head 2025-08-14T21:24:06.0897630Z * [new branch] gh/pearu/56/orig -> origin/gh/pearu/56/orig 2025-08-14T21:24:06.0897771Z * [new branch] gh/pearu/97/base -> origin/gh/pearu/97/base 2025-08-14T21:24:06.0897902Z * [new branch] gh/pearu/97/head -> origin/gh/pearu/97/head 2025-08-14T21:24:06.0898038Z * [new branch] gh/pearu/97/orig -> origin/gh/pearu/97/orig 2025-08-14T21:24:06.0898356Z * [new branch] gh/qqaatw/29/base -> origin/gh/qqaatw/29/base 2025-08-14T21:24:06.0898510Z * [new branch] gh/qqaatw/29/head -> origin/gh/qqaatw/29/head 2025-08-14T21:24:06.0898649Z * [new branch] gh/qqaatw/29/orig -> origin/gh/qqaatw/29/orig 2025-08-14T21:24:06.0898872Z * [new branch] gh/raymo/cleanup-dynamo-logging -> origin/gh/raymo/cleanup-dynamo-logging 2025-08-14T21:24:06.0899052Z * [new branch] gh/raymo/refresh-script -> origin/gh/raymo/refresh-script 2025-08-14T21:24:06.0899193Z * [new branch] gh/rec/141/base -> origin/gh/rec/141/base 2025-08-14T21:24:06.0899327Z * [new branch] gh/rec/141/head -> origin/gh/rec/141/head 2025-08-14T21:24:06.0899462Z * [new branch] gh/rec/153/base -> origin/gh/rec/153/base 2025-08-14T21:24:06.0899590Z * [new branch] gh/rec/153/head -> origin/gh/rec/153/head 2025-08-14T21:24:06.0899950Z * [new branch] gh/rec/153/orig -> origin/gh/rec/153/orig 2025-08-14T21:24:06.0900088Z * [new branch] gh/rec/154/base -> origin/gh/rec/154/base 2025-08-14T21:24:06.0900217Z * [new branch] gh/rec/154/head -> origin/gh/rec/154/head 2025-08-14T21:24:06.0900354Z * [new branch] gh/rec/154/orig -> origin/gh/rec/154/orig 2025-08-14T21:24:06.0900483Z * [new branch] gh/rec/156/base -> origin/gh/rec/156/base 2025-08-14T21:24:06.0900618Z * [new branch] gh/rec/156/head -> origin/gh/rec/156/head 2025-08-14T21:24:06.0900745Z * [new branch] gh/rec/156/orig -> origin/gh/rec/156/orig 2025-08-14T21:24:06.0900872Z * [new branch] gh/rec/158/base -> origin/gh/rec/158/base 2025-08-14T21:24:06.0901008Z * [new branch] gh/rec/158/head -> origin/gh/rec/158/head 2025-08-14T21:24:06.0901145Z * [new branch] gh/rec/158/orig -> origin/gh/rec/158/orig 2025-08-14T21:24:06.0901278Z * [new branch] gh/rec/159/base -> origin/gh/rec/159/base 2025-08-14T21:24:06.0901405Z * [new branch] gh/rec/159/head -> origin/gh/rec/159/head 2025-08-14T21:24:06.0901538Z * [new branch] gh/rec/160/base -> origin/gh/rec/160/base 2025-08-14T21:24:06.0901710Z * [new branch] gh/rec/160/head -> origin/gh/rec/160/head 2025-08-14T21:24:06.0902761Z * [new branch] gh/rec/160/orig -> origin/gh/rec/160/orig 2025-08-14T21:24:06.0903592Z * [new branch] gh/rec/161/base -> origin/gh/rec/161/base 2025-08-14T21:24:06.0904009Z * [new branch] 
gh/rec/161/head -> origin/gh/rec/161/head 2025-08-14T21:24:06.0905401Z * [new branch] gh/rec/161/orig -> origin/gh/rec/161/orig 2025-08-14T21:24:06.0905568Z * [new branch] gh/rec/162/base -> origin/gh/rec/162/base 2025-08-14T21:24:06.0909831Z * [new branch] gh/rec/162/head -> origin/gh/rec/162/head 2025-08-14T21:24:06.0910017Z * [new branch] gh/rec/162/orig -> origin/gh/rec/162/orig 2025-08-14T21:24:06.0910153Z * [new branch] gh/rec/163/base -> origin/gh/rec/163/base 2025-08-14T21:24:06.0910282Z * [new branch] gh/rec/163/head -> origin/gh/rec/163/head 2025-08-14T21:24:06.0910422Z * [new branch] gh/rec/163/orig -> origin/gh/rec/163/orig 2025-08-14T21:24:06.0910557Z * [new branch] gh/rec/164/base -> origin/gh/rec/164/base 2025-08-14T21:24:06.0911039Z * [new branch] gh/rec/164/head -> origin/gh/rec/164/head 2025-08-14T21:24:06.0911635Z * [new branch] gh/rec/164/orig -> origin/gh/rec/164/orig 2025-08-14T21:24:06.0916885Z * [new branch] gh/robert-hardwick/1/base -> origin/gh/robert-hardwick/1/base 2025-08-14T21:24:06.0917224Z * [new branch] gh/robert-hardwick/1/head -> origin/gh/robert-hardwick/1/head 2025-08-14T21:24:06.0920765Z * [new branch] gh/robert-hardwick/1/orig -> origin/gh/robert-hardwick/1/orig 2025-08-14T21:24:06.0926001Z * [new branch] gh/robert-hardwick/2/base -> origin/gh/robert-hardwick/2/base 2025-08-14T21:24:06.0931862Z * [new branch] gh/robert-hardwick/2/head -> origin/gh/robert-hardwick/2/head 2025-08-14T21:24:06.0934390Z * [new branch] gh/robert-hardwick/2/orig -> origin/gh/robert-hardwick/2/orig 2025-08-14T21:24:06.0934716Z * [new branch] gh/robert-hardwick/3/base -> origin/gh/robert-hardwick/3/base 2025-08-14T21:24:06.0934917Z * [new branch] gh/robert-hardwick/3/head -> origin/gh/robert-hardwick/3/head 2025-08-14T21:24:06.0935213Z * [new branch] gh/robert-hardwick/3/orig -> origin/gh/robert-hardwick/3/orig 2025-08-14T21:24:06.0935725Z * [new branch] gh/robert-hardwick/4/base -> origin/gh/robert-hardwick/4/base 2025-08-14T21:24:06.0936493Z * [new branch] gh/robert-hardwick/4/head -> origin/gh/robert-hardwick/4/head 2025-08-14T21:24:06.0936708Z * [new branch] gh/robert-hardwick/4/orig -> origin/gh/robert-hardwick/4/orig 2025-08-14T21:24:06.0936902Z * [new branch] gh/rtimpe/1/base -> origin/gh/rtimpe/1/base 2025-08-14T21:24:06.0937095Z * [new branch] gh/rtimpe/1/head -> origin/gh/rtimpe/1/head 2025-08-14T21:24:06.0937245Z * [new branch] gh/rtimpe/10/base -> origin/gh/rtimpe/10/base 2025-08-14T21:24:06.0937394Z * [new branch] gh/rtimpe/10/head -> origin/gh/rtimpe/10/head 2025-08-14T21:24:06.0937554Z * [new branch] gh/rtimpe/10/orig -> origin/gh/rtimpe/10/orig 2025-08-14T21:24:06.0937731Z * [new branch] gh/rtimpe/11/base -> origin/gh/rtimpe/11/base 2025-08-14T21:24:06.0937914Z * [new branch] gh/rtimpe/11/head -> origin/gh/rtimpe/11/head 2025-08-14T21:24:06.0938048Z * [new branch] gh/rtimpe/11/orig -> origin/gh/rtimpe/11/orig 2025-08-14T21:24:06.0938198Z * [new branch] gh/rtimpe/12/base -> origin/gh/rtimpe/12/base 2025-08-14T21:24:06.0938352Z * [new branch] gh/rtimpe/12/head -> origin/gh/rtimpe/12/head 2025-08-14T21:24:06.0938500Z * [new branch] gh/rtimpe/12/orig -> origin/gh/rtimpe/12/orig 2025-08-14T21:24:06.0938650Z * [new branch] gh/rtimpe/2/base -> origin/gh/rtimpe/2/base 2025-08-14T21:24:06.0938794Z * [new branch] gh/rtimpe/2/head -> origin/gh/rtimpe/2/head 2025-08-14T21:24:06.0938932Z * [new branch] gh/rtimpe/3/base -> origin/gh/rtimpe/3/base 2025-08-14T21:24:06.0939080Z * [new branch] gh/rtimpe/3/head -> origin/gh/rtimpe/3/head 2025-08-14T21:24:06.0939224Z * [new 
branch] gh/rtimpe/4/base -> origin/gh/rtimpe/4/base 2025-08-14T21:24:06.0939355Z * [new branch] gh/rtimpe/4/head -> origin/gh/rtimpe/4/head 2025-08-14T21:24:06.0939507Z * [new branch] gh/rtimpe/5/base -> origin/gh/rtimpe/5/base 2025-08-14T21:24:06.0939652Z * [new branch] gh/rtimpe/5/head -> origin/gh/rtimpe/5/head 2025-08-14T21:24:06.0939966Z * [new branch] gh/rtimpe/5/orig -> origin/gh/rtimpe/5/orig 2025-08-14T21:24:06.0940126Z * [new branch] gh/rtimpe/6/base -> origin/gh/rtimpe/6/base 2025-08-14T21:24:06.0940266Z * [new branch] gh/rtimpe/6/head -> origin/gh/rtimpe/6/head 2025-08-14T21:24:06.0940889Z * [new branch] gh/rtimpe/6/orig -> origin/gh/rtimpe/6/orig 2025-08-14T21:24:06.0942529Z * [new branch] gh/rtimpe/7/base -> origin/gh/rtimpe/7/base 2025-08-14T21:24:06.0942903Z * [new branch] gh/rtimpe/7/head -> origin/gh/rtimpe/7/head 2025-08-14T21:24:06.0943235Z * [new branch] gh/rtimpe/7/orig -> origin/gh/rtimpe/7/orig 2025-08-14T21:24:06.0944781Z * [new branch] gh/rtimpe/8/base -> origin/gh/rtimpe/8/base 2025-08-14T21:24:06.0945122Z * [new branch] gh/rtimpe/8/head -> origin/gh/rtimpe/8/head 2025-08-14T21:24:06.0945625Z * [new branch] gh/rtimpe/8/orig -> origin/gh/rtimpe/8/orig 2025-08-14T21:24:06.0947750Z * [new branch] gh/rtimpe/9/base -> origin/gh/rtimpe/9/base 2025-08-14T21:24:06.0948086Z * [new branch] gh/rtimpe/9/head -> origin/gh/rtimpe/9/head 2025-08-14T21:24:06.0948243Z * [new branch] gh/rtimpe/9/orig -> origin/gh/rtimpe/9/orig 2025-08-14T21:24:06.0949864Z * [new branch] gh/ruisizhang123/1/base -> origin/gh/ruisizhang123/1/base 2025-08-14T21:24:06.0950360Z * [new branch] gh/ruisizhang123/1/head -> origin/gh/ruisizhang123/1/head 2025-08-14T21:24:06.0950737Z * [new branch] gh/ruisizhang123/1/orig -> origin/gh/ruisizhang123/1/orig 2025-08-14T21:24:06.0953389Z * [new branch] gh/ruisizhang123/4/base -> origin/gh/ruisizhang123/4/base 2025-08-14T21:24:06.0953727Z * [new branch] gh/ruisizhang123/4/head -> origin/gh/ruisizhang123/4/head 2025-08-14T21:24:06.0953919Z * [new branch] gh/ruisizhang123/4/orig -> origin/gh/ruisizhang123/4/orig 2025-08-14T21:24:06.0956374Z * [new branch] gh/ruisizhang123/5/base -> origin/gh/ruisizhang123/5/base 2025-08-14T21:24:06.0956722Z * [new branch] gh/ruisizhang123/5/head -> origin/gh/ruisizhang123/5/head 2025-08-14T21:24:06.0956956Z * [new branch] gh/ruisizhang123/5/orig -> origin/gh/ruisizhang123/5/orig 2025-08-14T21:24:06.0957181Z * [new branch] gh/ruisizhang123/6/base -> origin/gh/ruisizhang123/6/base 2025-08-14T21:24:06.0961269Z * [new branch] gh/ruisizhang123/6/head -> origin/gh/ruisizhang123/6/head 2025-08-14T21:24:06.0961614Z * [new branch] gh/ruisizhang123/6/orig -> origin/gh/ruisizhang123/6/orig 2025-08-14T21:24:06.0961820Z * [new branch] gh/ruisizhang123/7/base -> origin/gh/ruisizhang123/7/base 2025-08-14T21:24:06.0961993Z * [new branch] gh/ruisizhang123/7/head -> origin/gh/ruisizhang123/7/head 2025-08-14T21:24:06.0962154Z * [new branch] gh/ruisizhang123/7/orig -> origin/gh/ruisizhang123/7/orig 2025-08-14T21:24:06.0962318Z * [new branch] gh/ruisizhang123/8/base -> origin/gh/ruisizhang123/8/base 2025-08-14T21:24:06.0963024Z * [new branch] gh/ruisizhang123/8/head -> origin/gh/ruisizhang123/8/head 2025-08-14T21:24:06.0963894Z * [new branch] gh/ruisizhang123/8/orig -> origin/gh/ruisizhang123/8/orig 2025-08-14T21:24:06.0968372Z * [new branch] gh/sarckk/2/base -> origin/gh/sarckk/2/base 2025-08-14T21:24:06.0968549Z * [new branch] gh/sarckk/2/head -> origin/gh/sarckk/2/head 2025-08-14T21:24:06.0968986Z * [new branch] gh/sarckk/2/orig -> 
origin/gh/sarckk/2/orig 2025-08-14T21:24:06.0969188Z * [new branch] gh/seemethere/23/head -> origin/gh/seemethere/23/head 2025-08-14T21:24:06.0969343Z * [new branch] gh/seemethere/24/base -> origin/gh/seemethere/24/base 2025-08-14T21:24:06.0969500Z * [new branch] gh/seemethere/24/head -> origin/gh/seemethere/24/head 2025-08-14T21:24:06.0969936Z * [new branch] gh/seemethere/24/orig -> origin/gh/seemethere/24/orig 2025-08-14T21:24:06.0973205Z * [new branch] gh/seemethere/30/base -> origin/gh/seemethere/30/base 2025-08-14T21:24:06.0973420Z * [new branch] gh/seemethere/30/head -> origin/gh/seemethere/30/head 2025-08-14T21:24:06.0973790Z * [new branch] gh/seemethere/30/orig -> origin/gh/seemethere/30/orig 2025-08-14T21:24:06.0973957Z * [new branch] gh/seemethere/32/base -> origin/gh/seemethere/32/base 2025-08-14T21:24:06.0974310Z * [new branch] gh/seemethere/32/head -> origin/gh/seemethere/32/head 2025-08-14T21:24:06.0974490Z * [new branch] gh/seemethere/32/orig -> origin/gh/seemethere/32/orig 2025-08-14T21:24:06.0975645Z * [new branch] gh/seemethere/33/base -> origin/gh/seemethere/33/base 2025-08-14T21:24:06.0975865Z * [new branch] gh/seemethere/33/head -> origin/gh/seemethere/33/head 2025-08-14T21:24:06.0976927Z * [new branch] gh/seemethere/33/orig -> origin/gh/seemethere/33/orig 2025-08-14T21:24:06.0977820Z * [new branch] gh/seemethere/34/base -> origin/gh/seemethere/34/base 2025-08-14T21:24:06.0978384Z * [new branch] gh/seemethere/34/head -> origin/gh/seemethere/34/head 2025-08-14T21:24:06.0979077Z * [new branch] gh/seemethere/34/orig -> origin/gh/seemethere/34/orig 2025-08-14T21:24:06.0983797Z * [new branch] gh/seemethere/35/base -> origin/gh/seemethere/35/base 2025-08-14T21:24:06.0984003Z * [new branch] gh/seemethere/35/head -> origin/gh/seemethere/35/head 2025-08-14T21:24:06.0984164Z * [new branch] gh/seemethere/35/orig -> origin/gh/seemethere/35/orig 2025-08-14T21:24:06.0984320Z * [new branch] gh/seemethere/37/base -> origin/gh/seemethere/37/base 2025-08-14T21:24:06.0984483Z * [new branch] gh/seemethere/37/head -> origin/gh/seemethere/37/head 2025-08-14T21:24:06.0984638Z * [new branch] gh/seemethere/37/orig -> origin/gh/seemethere/37/orig 2025-08-14T21:24:06.0989017Z * [new branch] gh/seemethere/39/base -> origin/gh/seemethere/39/base 2025-08-14T21:24:06.0989393Z * [new branch] gh/seemethere/39/head -> origin/gh/seemethere/39/head 2025-08-14T21:24:06.0989570Z * [new branch] gh/seemethere/39/orig -> origin/gh/seemethere/39/orig 2025-08-14T21:24:06.0989727Z * [new branch] gh/seemethere/40/base -> origin/gh/seemethere/40/base 2025-08-14T21:24:06.0989870Z * [new branch] gh/seemethere/40/head -> origin/gh/seemethere/40/head 2025-08-14T21:24:06.0990013Z * [new branch] gh/seemethere/40/orig -> origin/gh/seemethere/40/orig 2025-08-14T21:24:06.0995791Z * [new branch] gh/seemethere/41/base -> origin/gh/seemethere/41/base 2025-08-14T21:24:06.0995990Z * [new branch] gh/seemethere/41/head -> origin/gh/seemethere/41/head 2025-08-14T21:24:06.1002578Z * [new branch] gh/seemethere/41/orig -> origin/gh/seemethere/41/orig 2025-08-14T21:24:06.1004723Z * [new branch] gh/seemethere/42/base -> origin/gh/seemethere/42/base 2025-08-14T21:24:06.1005073Z * [new branch] gh/seemethere/42/head -> origin/gh/seemethere/42/head 2025-08-14T21:24:06.1005266Z * [new branch] gh/seemethere/42/orig -> origin/gh/seemethere/42/orig 2025-08-14T21:24:06.1005437Z * [new branch] gh/seemethere/43/base -> origin/gh/seemethere/43/base 2025-08-14T21:24:06.1005665Z * [new branch] gh/seemethere/43/head -> origin/gh/seemethere/43/head 
2025-08-14T21:24:06.1005953Z * [new branch] gh/seemethere/43/orig -> origin/gh/seemethere/43/orig 2025-08-14T21:24:06.1006110Z * [new branch] gh/seemethere/44/base -> origin/gh/seemethere/44/base 2025-08-14T21:24:06.1006256Z * [new branch] gh/seemethere/44/head -> origin/gh/seemethere/44/head 2025-08-14T21:24:06.1006411Z * [new branch] gh/seemethere/44/orig -> origin/gh/seemethere/44/orig 2025-08-14T21:24:06.1006711Z * [new branch] gh/seemethere/45/base -> origin/gh/seemethere/45/base 2025-08-14T21:24:06.1006867Z * [new branch] gh/seemethere/45/head -> origin/gh/seemethere/45/head 2025-08-14T21:24:06.1007018Z * [new branch] gh/seemethere/45/orig -> origin/gh/seemethere/45/orig 2025-08-14T21:24:06.1007159Z * [new branch] gh/seemethere/46/base -> origin/gh/seemethere/46/base 2025-08-14T21:24:06.1007308Z * [new branch] gh/seemethere/46/head -> origin/gh/seemethere/46/head 2025-08-14T21:24:06.1007452Z * [new branch] gh/seemethere/46/orig -> origin/gh/seemethere/46/orig 2025-08-14T21:24:06.1007604Z * [new branch] gh/seemethere/47/base -> origin/gh/seemethere/47/base 2025-08-14T21:24:06.1007755Z * [new branch] gh/seemethere/47/head -> origin/gh/seemethere/47/head 2025-08-14T21:24:06.1007896Z * [new branch] gh/seemethere/47/orig -> origin/gh/seemethere/47/orig 2025-08-14T21:24:06.1008153Z * [new branch] gh/seemethere/48/base -> origin/gh/seemethere/48/base 2025-08-14T21:24:06.1008298Z * [new branch] gh/seemethere/48/head -> origin/gh/seemethere/48/head 2025-08-14T21:24:06.1008439Z * [new branch] gh/seemethere/48/orig -> origin/gh/seemethere/48/orig 2025-08-14T21:24:06.1008588Z * [new branch] gh/seemethere/49/base -> origin/gh/seemethere/49/base 2025-08-14T21:24:06.1008731Z * [new branch] gh/seemethere/49/head -> origin/gh/seemethere/49/head 2025-08-14T21:24:06.1008875Z * [new branch] gh/seemethere/49/orig -> origin/gh/seemethere/49/orig 2025-08-14T21:24:06.1014338Z * [new branch] gh/seemethere/50/base -> origin/gh/seemethere/50/base 2025-08-14T21:24:06.1016488Z * [new branch] gh/seemethere/50/head -> origin/gh/seemethere/50/head 2025-08-14T21:24:06.1016833Z * [new branch] gh/seemethere/50/orig -> origin/gh/seemethere/50/orig 2025-08-14T21:24:06.1017029Z * [new branch] gh/seemethere/51/base -> origin/gh/seemethere/51/base 2025-08-14T21:24:06.1017178Z * [new branch] gh/seemethere/51/head -> origin/gh/seemethere/51/head 2025-08-14T21:24:06.1017448Z * [new branch] gh/seemethere/51/orig -> origin/gh/seemethere/51/orig 2025-08-14T21:24:06.1017616Z * [new branch] gh/seemethere/52/base -> origin/gh/seemethere/52/base 2025-08-14T21:24:06.1017760Z * [new branch] gh/seemethere/52/head -> origin/gh/seemethere/52/head 2025-08-14T21:24:06.1017910Z * [new branch] gh/seemethere/52/orig -> origin/gh/seemethere/52/orig 2025-08-14T21:24:06.1018052Z * [new branch] gh/seemethere/53/base -> origin/gh/seemethere/53/base 2025-08-14T21:24:06.1018313Z * [new branch] gh/seemethere/53/head -> origin/gh/seemethere/53/head 2025-08-14T21:24:06.1018481Z * [new branch] gh/seemethere/53/orig -> origin/gh/seemethere/53/orig 2025-08-14T21:24:06.1018624Z * [new branch] gh/seemethere/54/base -> origin/gh/seemethere/54/base 2025-08-14T21:24:06.1018776Z * [new branch] gh/seemethere/54/head -> origin/gh/seemethere/54/head 2025-08-14T21:24:06.1018921Z * [new branch] gh/seemethere/54/orig -> origin/gh/seemethere/54/orig 2025-08-14T21:24:06.1019063Z * [new branch] gh/seemethere/55/base -> origin/gh/seemethere/55/base 2025-08-14T21:24:06.1019227Z * [new branch] gh/seemethere/55/head -> origin/gh/seemethere/55/head 
2025-08-14T21:24:06.1019888Z * [new branch] gh/seemethere/55/orig -> origin/gh/seemethere/55/orig
2025-08-14T21:24:06.1022836Z * [new branch] gh/seemethere/56/base -> origin/gh/seemethere/56/base
2025-08-14T21:24:06.1023191Z * [new branch] gh/seemethere/56/head -> origin/gh/seemethere/56/head
[… several hundred similar fetch entries omitted: `* [new branch] gh/<user>/<number>/{base,head,orig} -> origin/gh/<user>/<number>/{base,head,orig}` remote-tracking branches registered for users from seemethere through zklaus, all timestamped 2025-08-14T21:24:06 …]
2025-08-14T21:24:06.1634185Z * [new branch] gh/zklaus/14/base -> origin/gh/zklaus/14/base
2025-08-14T21:24:06.1634310Z * [new branch] gh/zklaus/14/head -> origin/gh/zklaus/14/head
2025-08-14T21:24:06.1634436Z *
[new branch] gh/zklaus/14/orig -> origin/gh/zklaus/14/orig 2025-08-14T21:24:06.1634572Z * [new branch] gh/zklaus/15/base -> origin/gh/zklaus/15/base 2025-08-14T21:24:06.1634695Z * [new branch] gh/zklaus/15/head -> origin/gh/zklaus/15/head 2025-08-14T21:24:06.1634994Z * [new branch] gh/zklaus/15/orig -> origin/gh/zklaus/15/orig 2025-08-14T21:24:06.1635122Z * [new branch] gh/zklaus/16/base -> origin/gh/zklaus/16/base 2025-08-14T21:24:06.1639903Z * [new branch] gh/zklaus/16/head -> origin/gh/zklaus/16/head 2025-08-14T21:24:06.1640089Z * [new branch] gh/zklaus/16/orig -> origin/gh/zklaus/16/orig 2025-08-14T21:24:06.1640243Z * [new branch] gh/zklaus/17/base -> origin/gh/zklaus/17/base 2025-08-14T21:24:06.1640392Z * [new branch] gh/zklaus/17/head -> origin/gh/zklaus/17/head 2025-08-14T21:24:06.1640531Z * [new branch] gh/zklaus/17/orig -> origin/gh/zklaus/17/orig 2025-08-14T21:24:06.1640670Z * [new branch] gh/zklaus/18/base -> origin/gh/zklaus/18/base 2025-08-14T21:24:06.1640811Z * [new branch] gh/zklaus/18/head -> origin/gh/zklaus/18/head 2025-08-14T21:24:06.1642539Z * [new branch] gh/zklaus/18/orig -> origin/gh/zklaus/18/orig 2025-08-14T21:24:06.1642696Z * [new branch] gh/zklaus/19/base -> origin/gh/zklaus/19/base 2025-08-14T21:24:06.1642833Z * [new branch] gh/zklaus/19/head -> origin/gh/zklaus/19/head 2025-08-14T21:24:06.1642972Z * [new branch] gh/zklaus/19/orig -> origin/gh/zklaus/19/orig 2025-08-14T21:24:06.1650233Z * [new branch] gh/zklaus/7/base -> origin/gh/zklaus/7/base 2025-08-14T21:24:06.1655756Z * [new branch] gh/zklaus/7/head -> origin/gh/zklaus/7/head 2025-08-14T21:24:06.1656085Z * [new branch] gh/zklaus/7/orig -> origin/gh/zklaus/7/orig 2025-08-14T21:24:06.1656252Z * [new branch] gh/zklaus/9/base -> origin/gh/zklaus/9/base 2025-08-14T21:24:06.1656401Z * [new branch] gh/zklaus/9/head -> origin/gh/zklaus/9/head 2025-08-14T21:24:06.1656558Z * [new branch] gh/zklaus/9/orig -> origin/gh/zklaus/9/orig 2025-08-14T21:24:06.1656847Z * [new branch] gh/zou3519/1175/base -> origin/gh/zou3519/1175/base 2025-08-14T21:24:06.1657016Z * [new branch] gh/zou3519/1175/head -> origin/gh/zou3519/1175/head 2025-08-14T21:24:06.1657608Z * [new branch] gh/zou3519/1175/orig -> origin/gh/zou3519/1175/orig 2025-08-14T21:24:06.1657811Z * [new branch] gh/zou3519/1177/base -> origin/gh/zou3519/1177/base 2025-08-14T21:24:06.1657961Z * [new branch] gh/zou3519/1177/head -> origin/gh/zou3519/1177/head 2025-08-14T21:24:06.1658104Z * [new branch] gh/zou3519/1177/orig -> origin/gh/zou3519/1177/orig 2025-08-14T21:24:06.1658254Z * [new branch] gh/zou3519/1187/base -> origin/gh/zou3519/1187/base 2025-08-14T21:24:06.1658394Z * [new branch] gh/zou3519/1187/head -> origin/gh/zou3519/1187/head 2025-08-14T21:24:06.1658773Z * [new branch] gh/zou3519/1187/orig -> origin/gh/zou3519/1187/orig 2025-08-14T21:24:06.1658926Z * [new branch] gh/zou3519/1188/base -> origin/gh/zou3519/1188/base 2025-08-14T21:24:06.1659069Z * [new branch] gh/zou3519/1188/head -> origin/gh/zou3519/1188/head 2025-08-14T21:24:06.1659216Z * [new branch] gh/zou3519/1188/orig -> origin/gh/zou3519/1188/orig 2025-08-14T21:24:06.1659354Z * [new branch] gh/zou3519/1189/base -> origin/gh/zou3519/1189/base 2025-08-14T21:24:06.1659500Z * [new branch] gh/zou3519/1189/head -> origin/gh/zou3519/1189/head 2025-08-14T21:24:06.1659635Z * [new branch] gh/zou3519/1189/orig -> origin/gh/zou3519/1189/orig 2025-08-14T21:24:06.1660117Z * [new branch] gh/zou3519/1190/base -> origin/gh/zou3519/1190/base 2025-08-14T21:24:06.1666339Z * [new branch] gh/zou3519/1190/head -> 
origin/gh/zou3519/1190/head 2025-08-14T21:24:06.1668412Z * [new branch] gh/zou3519/1190/orig -> origin/gh/zou3519/1190/orig 2025-08-14T21:24:06.1668695Z * [new branch] gh/zou3519/1191/base -> origin/gh/zou3519/1191/base 2025-08-14T21:24:06.1671856Z * [new branch] gh/zou3519/1191/head -> origin/gh/zou3519/1191/head 2025-08-14T21:24:06.1672111Z * [new branch] gh/zou3519/1191/orig -> origin/gh/zou3519/1191/orig 2025-08-14T21:24:06.1677070Z * [new branch] gh/zpcore/1/base -> origin/gh/zpcore/1/base 2025-08-14T21:24:06.1682501Z * [new branch] gh/zpcore/1/head -> origin/gh/zpcore/1/head 2025-08-14T21:24:06.1684569Z * [new branch] gh/zpcore/10/base -> origin/gh/zpcore/10/base 2025-08-14T21:24:06.1684842Z * [new branch] gh/zpcore/10/head -> origin/gh/zpcore/10/head 2025-08-14T21:24:06.1688272Z * [new branch] gh/zpcore/10/orig -> origin/gh/zpcore/10/orig 2025-08-14T21:24:06.1688589Z * [new branch] gh/zpcore/11/base -> origin/gh/zpcore/11/base 2025-08-14T21:24:06.1688763Z * [new branch] gh/zpcore/11/head -> origin/gh/zpcore/11/head 2025-08-14T21:24:06.1688903Z * [new branch] gh/zpcore/11/orig -> origin/gh/zpcore/11/orig 2025-08-14T21:24:06.1689031Z * [new branch] gh/zpcore/12/base -> origin/gh/zpcore/12/base 2025-08-14T21:24:06.1689156Z * [new branch] gh/zpcore/12/head -> origin/gh/zpcore/12/head 2025-08-14T21:24:06.1689288Z * [new branch] gh/zpcore/12/orig -> origin/gh/zpcore/12/orig 2025-08-14T21:24:06.1689424Z * [new branch] gh/zpcore/2/base -> origin/gh/zpcore/2/base 2025-08-14T21:24:06.1689555Z * [new branch] gh/zpcore/2/head -> origin/gh/zpcore/2/head 2025-08-14T21:24:06.1689692Z * [new branch] gh/zpcore/3/base -> origin/gh/zpcore/3/base 2025-08-14T21:24:06.1689817Z * [new branch] gh/zpcore/3/head -> origin/gh/zpcore/3/head 2025-08-14T21:24:06.1689947Z * [new branch] gh/zpcore/4/base -> origin/gh/zpcore/4/base 2025-08-14T21:24:06.1690068Z * [new branch] gh/zpcore/4/head -> origin/gh/zpcore/4/head 2025-08-14T21:24:06.1690191Z * [new branch] gh/zpcore/5/base -> origin/gh/zpcore/5/base 2025-08-14T21:24:06.1694506Z * [new branch] gh/zpcore/5/head -> origin/gh/zpcore/5/head 2025-08-14T21:24:06.1694835Z * [new branch] gh/zpcore/6/base -> origin/gh/zpcore/6/base 2025-08-14T21:24:06.1695004Z * [new branch] gh/zpcore/6/head -> origin/gh/zpcore/6/head 2025-08-14T21:24:06.1695141Z * [new branch] gh/zpcore/7/base -> origin/gh/zpcore/7/base 2025-08-14T21:24:06.1695425Z * [new branch] gh/zpcore/7/head -> origin/gh/zpcore/7/head 2025-08-14T21:24:06.1695786Z * [new branch] gh/zpcore/8/base -> origin/gh/zpcore/8/base 2025-08-14T21:24:06.1695938Z * [new branch] gh/zpcore/8/head -> origin/gh/zpcore/8/head 2025-08-14T21:24:06.1696087Z * [new branch] gh/zpcore/9/head -> origin/gh/zpcore/9/head 2025-08-14T21:24:06.1696229Z * [new branch] gh/zpcore/9/orig -> origin/gh/zpcore/9/orig 2025-08-14T21:24:06.1696366Z * [new branch] google-main -> origin/google-main 2025-08-14T21:24:06.1696546Z * [new branch] guangyey/external_stream -> origin/guangyey/external_stream 2025-08-14T21:24:06.1696699Z * [new branch] guangyey/host_alloc -> origin/guangyey/host_alloc 2025-08-14T21:24:06.1696863Z * [new branch] guangyey/test_2025 -> origin/guangyey/test_2025 2025-08-14T21:24:06.1697213Z * [new branch] guilhermeleobas/cherry-pick-55d87d9dfd9 -> origin/guilhermeleobas/cherry-pick-55d87d9dfd9 2025-08-14T21:24:06.1697384Z * [new branch] haozhe/bf16-dynamic-shape -> origin/haozhe/bf16-dynamic-shape 2025-08-14T21:24:06.1697519Z * [new branch] hc_baseline -> origin/hc_baseline 2025-08-14T21:24:06.1697678Z * [new branch] 
headeronlyScalarType -> origin/headeronlyScalarType 2025-08-14T21:24:06.1697809Z * [new branch] hf_update -> origin/hf_update 2025-08-14T21:24:06.1697940Z * [new branch] hhh_decomp_mul -> origin/hhh_decomp_mul 2025-08-14T21:24:06.1698062Z * [new branch] hhh_rand -> origin/hhh_rand 2025-08-14T21:24:06.1698200Z * [new branch] hoy/mmsplitk -> origin/hoy/mmsplitk 2025-08-14T21:24:06.1698345Z * [new branch] hoy/triton-PR3973 -> origin/hoy/triton-PR3973 2025-08-14T21:24:06.1698549Z * [new branch] hoy/triton-coalescing-baseline -> origin/hoy/triton-coalescing-baseline 2025-08-14T21:24:06.1698738Z * [new branch] hoy/triton-coalescing-min -> origin/hoy/triton-coalescing-min 2025-08-14T21:24:06.1698907Z * [new branch] hoy/triton-coalescing-new -> origin/hoy/triton-coalescing-new 2025-08-14T21:24:06.1699074Z * [new branch] hoy/triton-coalescing-vec -> origin/hoy/triton-coalescing-vec 2025-08-14T21:24:06.1699401Z * [new branch] inductordecompfix -> origin/inductordecompfix 2025-08-14T21:24:06.1700438Z * [new branch] inline -> origin/inline 2025-08-14T21:24:06.1704744Z * [new branch] inlining -> origin/inlining 2025-08-14T21:24:06.1704935Z * [new branch] inlining-ezyang -> origin/inlining-ezyang 2025-08-14T21:24:06.1705080Z * [new branch] int8_sdpa -> origin/int8_sdpa 2025-08-14T21:24:06.1705270Z * [new branch] invoke-subgraph -> origin/invoke-subgraph 2025-08-14T21:24:06.1705401Z * [new branch] issue#58739 -> origin/issue#58739 2025-08-14T21:24:06.1705539Z * [new branch] issue-154849 -> origin/issue-154849 2025-08-14T21:24:06.1710784Z * [new branch] ivanov/cherry-pick-ckpt-fixes -> origin/ivanov/cherry-pick-ckpt-fixes 2025-08-14T21:24:06.1713051Z * [new branch] jcaip/test-cusparselt-version-0.6.2 -> origin/jcaip/test-cusparselt-version-0.6.2 2025-08-14T21:24:06.1719205Z * [new branch] jcaip/update-cusparselt-0.6.2 -> origin/jcaip/update-cusparselt-0.6.2 2025-08-14T21:24:06.1724308Z * [new branch] jithunnair-amd-patch-1 -> origin/jithunnair-amd-patch-1 2025-08-14T21:24:06.1729647Z * [new branch] justinchu/attention-tests -> origin/justinchu/attention-tests 2025-08-14T21:24:06.1732351Z * [new branch] justinchu/native-qdq -> origin/justinchu/native-qdq 2025-08-14T21:24:06.1732851Z * [new branch] justinchuby/JitScalarType -> origin/justinchuby/JitScalarType 2025-08-14T21:24:06.1736626Z * [new branch] justinchuby/dynamo-true -> origin/justinchuby/dynamo-true 2025-08-14T21:24:06.1740758Z * [new branch] justinchuby/opset-20 -> origin/justinchuby/opset-20 2025-08-14T21:24:06.1745213Z * [new branch] kainan666/xlf_debug -> origin/kainan666/xlf_debug 2025-08-14T21:24:06.1747859Z * [new branch] kainan_test -> origin/kainan_test 2025-08-14T21:24:06.1748281Z * [new branch] leslie/enable_poc_reduction_fusion -> origin/leslie/enable_poc_reduction_fusion 2025-08-14T21:24:06.1748506Z * [new branch] leslie/test_group_gemm_epilogues -> origin/leslie/test_group_gemm_epilogues 2025-08-14T21:24:06.1748717Z * [new branch] lessw2020/fix_cutlass_cache_error -> origin/lessw2020/fix_cutlass_cache_error 2025-08-14T21:24:06.1749184Z * [new branch] liaoxuan/shm_all_reduce -> origin/liaoxuan/shm_all_reduce 2025-08-14T21:24:06.1749341Z * [new branch] liaoxuan/tags_issue -> origin/liaoxuan/tags_issue 2025-08-14T21:24:06.1749544Z * [new branch] liaoxuan/test_fa_disable_softmax -> origin/liaoxuan/test_fa_disable_softmax 2025-08-14T21:24:06.1749718Z * [new branch] liaoxuan/test_int8_sdpa -> origin/liaoxuan/test_int8_sdpa 2025-08-14T21:24:06.1749877Z * [new branch] lintbuilddocker -> origin/lintbuilddocker 2025-08-14T21:24:06.1750045Z * 
[new branch] llama4-stable -> origin/llama4-stable 2025-08-14T21:24:06.1750181Z * [new branch] logdetfix -> origin/logdetfix 2025-08-14T21:24:06.1750328Z * [new branch] lts/release/1.8 -> origin/lts/release/1.8 2025-08-14T21:24:06.1750498Z * [new branch] lucaskabela/#94773 -> origin/lucaskabela/#94773 2025-08-14T21:24:06.1750674Z * [new branch] lucaskabela/fix_157452 -> origin/lucaskabela/fix_157452 2025-08-14T21:24:06.1750912Z * [new branch] lucaskabela/fix_circular_import_158120 -> origin/lucaskabela/fix_circular_import_158120 2025-08-14T21:24:06.1751100Z * [new branch] lucaskabela/func_under_decomp -> origin/lucaskabela/func_under_decomp 2025-08-14T21:24:06.1751318Z * [new branch] lucaskabela/functional_in_dynamo -> origin/lucaskabela/functional_in_dynamo 2025-08-14T21:24:06.1751563Z * [new branch] lucaskabela/install_params_as_graph_attr -> origin/lucaskabela/install_params_as_graph_attr 2025-08-14T21:24:06.1751734Z * [new branch] lucaskabela/issue_120648 -> origin/lucaskabela/issue_120648 2025-08-14T21:24:06.1751962Z * [new branch] lucaskabela/parameters_as_graph_attr -> origin/lucaskabela/parameters_as_graph_attr 2025-08-14T21:24:06.1752141Z * [new branch] lucaskabela/registry_fix -> origin/lucaskabela/registry_fix 2025-08-14T21:24:06.1752398Z * [new branch] lucaskabela/remove_aot_dispatcher_metadata -> origin/lucaskabela/remove_aot_dispatcher_metadata 2025-08-14T21:24:06.1752600Z * [new branch] lucaskabela/type_guards -> origin/lucaskabela/type_guards 2025-08-14T21:24:06.1752770Z * [new branch] lucaskabela/typing-misc -> origin/lucaskabela/typing-misc 2025-08-14T21:24:06.1752964Z * [new branch] lucaskabela/typing_backends -> origin/lucaskabela/typing_backends 2025-08-14T21:24:06.1753234Z * [new branch] lucaskabela/typing_bytecode_analysis_transform -> origin/lucaskabela/typing_bytecode_analysis_transform 2025-08-14T21:24:06.1753427Z * [new branch] lucaskabela/typing_cache_files -> origin/lucaskabela/typing_cache_files 2025-08-14T21:24:06.1753649Z * [new branch] lucaskabela/typing_compile_autograd -> origin/lucaskabela/typing_compile_autograd 2025-08-14T21:24:06.1753926Z * [new branch] lucaskabela/typing_debug_utils.py -> origin/lucaskabela/typing_debug_utils.py 2025-08-14T21:24:06.1754123Z * [new branch] lucaskabela/typing_decorators -> origin/lucaskabela/typing_decorators 2025-08-14T21:24:06.1754316Z * [new branch] lucaskabela/typing_eval_frame -> origin/lucaskabela/typing_eval_frame 2025-08-14T21:24:06.1754505Z * [new branch] lucaskabela/typing_for_codegen -> origin/lucaskabela/typing_for_codegen 2025-08-14T21:24:06.1754702Z * [new branch] lucaskabela/typing_output_graph -> origin/lucaskabela/typing_output_graph 2025-08-14T21:24:06.1754906Z * [new branch] lucaskabela/typing_side_effects -> origin/lucaskabela/typing_side_effects 2025-08-14T21:24:06.1755109Z * [new branch] lucaskabela/typing_source_guard -> origin/lucaskabela/typing_source_guard 2025-08-14T21:24:06.1755299Z * [new branch] lucaskabela/typing_trace_rules -> origin/lucaskabela/typing_trace_rules 2025-08-14T21:24:06.1755546Z * [new branch] lucaskabela/typing_utils.py -> origin/lucaskabela/typing_utils.py 2025-08-14T21:24:06.1755777Z * [new branch] lucaskabela/typing_utils_improvements -> origin/lucaskabela/typing_utils_improvements 2025-08-14T21:24:06.1755907Z * [new branch] main -> origin/main 2025-08-14T21:24:06.1756150Z * [new branch] main-enable-b200-distributed-tests -> origin/main-enable-b200-distributed-tests 2025-08-14T21:24:06.1756304Z * [new branch] malfet-patch-1 -> origin/malfet-patch-1 
2025-08-14T21:24:06.1756465Z * [new branch] malfet-patch-10 -> origin/malfet-patch-10 2025-08-14T21:24:06.1756614Z * [new branch] malfet-patch-11 -> origin/malfet-patch-11 2025-08-14T21:24:06.1756753Z * [new branch] malfet-patch-13 -> origin/malfet-patch-13 2025-08-14T21:24:06.1756892Z * [new branch] malfet-patch-14 -> origin/malfet-patch-14 2025-08-14T21:24:06.1757038Z * [new branch] malfet-patch-2 -> origin/malfet-patch-2 2025-08-14T21:24:06.1757171Z * [new branch] malfet-patch-3 -> origin/malfet-patch-3 2025-08-14T21:24:06.1757314Z * [new branch] malfet-patch-4 -> origin/malfet-patch-4 2025-08-14T21:24:06.1757448Z * [new branch] malfet-patch-5 -> origin/malfet-patch-5 2025-08-14T21:24:06.1757578Z * [new branch] malfet-patch-6 -> origin/malfet-patch-6 2025-08-14T21:24:06.1757717Z * [new branch] malfet-patch-7 -> origin/malfet-patch-7 2025-08-14T21:24:06.1757849Z * [new branch] malfet-patch-8 -> origin/malfet-patch-8 2025-08-14T21:24:06.1757989Z * [new branch] malfet-patch-9 -> origin/malfet-patch-9 2025-08-14T21:24:06.1758420Z * [new branch] malfet/delete-upsteam-cuda -> origin/malfet/delete-upsteam-cuda 2025-08-14T21:24:06.1758621Z * [new branch] malfet/mps-implement-col2im -> origin/malfet/mps-implement-col2im 2025-08-14T21:24:06.1758839Z * [new branch] manuel/fix_multidim_boolean_indexing -> origin/manuel/fix_multidim_boolean_indexing 2025-08-14T21:24:06.1759002Z * [new branch] manuel/np_empty_ellipsis -> origin/manuel/np_empty_ellipsis 2025-08-14T21:24:06.1759352Z * [new branch] manuel/test-ops-common-allow-mps -> origin/manuel/test-ops-common-allow-mps 2025-08-14T21:24:06.1759958Z * [new branch] metascroy-patch-1 -> origin/metascroy-patch-1 2025-08-14T21:24:06.1763467Z * [new branch] mlazos/S429861-debug -> origin/mlazos/S429861-debug 2025-08-14T21:24:06.1763630Z * [new branch] mlazos/aa -> origin/mlazos/aa 2025-08-14T21:24:06.1764102Z * [new branch] mlazos/arg-renames -> origin/mlazos/arg-renames 2025-08-14T21:24:06.1764519Z * [new branch] mlazos/backup-test-branch -> origin/mlazos/backup-test-branch 2025-08-14T21:24:06.1764703Z * [new branch] mlazos/bad-cudagraphs -> origin/mlazos/bad-cudagraphs 2025-08-14T21:24:06.1770250Z * [new branch] mlazos/baseline -> origin/mlazos/baseline 2025-08-14T21:24:06.1775213Z * [new branch] mlazos/baseline-graph-breaks -> origin/mlazos/baseline-graph-breaks 2025-08-14T21:24:06.1775396Z * [new branch] mlazos/beta-tensor -> origin/mlazos/beta-tensor 2025-08-14T21:24:06.1775849Z * [new branch] mlazos/buffers -> origin/mlazos/buffers 2025-08-14T21:24:06.1776018Z * [new branch] mlazos/buffers2 -> origin/mlazos/buffers2 2025-08-14T21:24:06.1776176Z * [new branch] mlazos/buffers3 -> origin/mlazos/buffers3 2025-08-14T21:24:06.1776317Z * [new branch] mlazos/ck2 -> origin/mlazos/ck2 2025-08-14T21:24:06.1776636Z * [new branch] mlazos/combokernels -> origin/mlazos/combokernels 2025-08-14T21:24:06.1776808Z * [new branch] mlazos/ctx-cleanup -> origin/mlazos/ctx-cleanup 2025-08-14T21:24:06.1776974Z * [new branch] mlazos/cudagraph-tests -> origin/mlazos/cudagraph-tests 2025-08-14T21:24:06.1777195Z * [new branch] mlazos/cudagraphs-measurement -> origin/mlazos/cudagraphs-measurement 2025-08-14T21:24:06.1777368Z * [new branch] mlazos/cutlass-test -> origin/mlazos/cutlass-test 2025-08-14T21:24:06.1777548Z * [new branch] mlazos/cutlass-topo-bug -> origin/mlazos/cutlass-topo-bug 2025-08-14T21:24:06.1777703Z * [new branch] mlazos/data-gather -> origin/mlazos/data-gather 2025-08-14T21:24:06.1777856Z * [new branch] mlazos/data-ptrs2 -> origin/mlazos/data-ptrs2 
2025-08-14T21:24:06.1777997Z * [new branch] mlazos/data-ptrs3 -> origin/mlazos/data-ptrs3 2025-08-14T21:24:06.1778179Z * [new branch] mlazos/dataclass-proxy -> origin/mlazos/dataclass-proxy 2025-08-14T21:24:06.1778316Z * [new branch] mlazos/dc-attrs -> origin/mlazos/dc-attrs 2025-08-14T21:24:06.1778460Z * [new branch] mlazos/dc-helion -> origin/mlazos/dc-helion 2025-08-14T21:24:06.1778611Z * [new branch] mlazos/dict-fix -> origin/mlazos/dict-fix 2025-08-14T21:24:06.1778786Z * [new branch] mlazos/disable-closures -> origin/mlazos/disable-closures 2025-08-14T21:24:06.1778938Z * [new branch] mlazos/disable-tf -> origin/mlazos/disable-tf 2025-08-14T21:24:06.1779074Z * [new branch] mlazos/dupe-fix -> origin/mlazos/dupe-fix 2025-08-14T21:24:06.1779244Z * [new branch] mlazos/dyn-batch -> origin/mlazos/dyn-batch 2025-08-14T21:24:06.1780601Z * [new branch] mlazos/evt -> origin/mlazos/evt 2025-08-14T21:24:06.1785740Z * [new branch] mlazos/exp_disable -> origin/mlazos/exp_disable 2025-08-14T21:24:06.1790068Z * [new branch] mlazos/extract-examples -> origin/mlazos/extract-examples 2025-08-14T21:24:06.1794342Z * [new branch] mlazos/foreach-op -> origin/mlazos/foreach-op 2025-08-14T21:24:06.1798616Z * [new branch] mlazos/fp8 -> origin/mlazos/fp8 2025-08-14T21:24:06.1802597Z * [new branch] mlazos/fp8-bias -> origin/mlazos/fp8-bias 2025-08-14T21:24:06.1807610Z * [new branch] mlazos/fp8-bias-fusion -> origin/mlazos/fp8-bias-fusion 2025-08-14T21:24:06.1810695Z * [new branch] mlazos/freezing -> origin/mlazos/freezing 2025-08-14T21:24:06.1814939Z * [new branch] mlazos/h-comp -> origin/mlazos/h-comp 2025-08-14T21:24:06.1817781Z * [new branch] mlazos/h-comp2 -> origin/mlazos/h-comp2 2025-08-14T21:24:06.1818152Z * [new branch] mlazos/hash-hop -> origin/mlazos/hash-hop 2025-08-14T21:24:06.1818307Z * [new branch] mlazos/hc -> origin/mlazos/hc 2025-08-14T21:24:06.1818456Z * [new branch] mlazos/hc-cycles -> origin/mlazos/hc-cycles 2025-08-14T21:24:06.1818598Z * [new branch] mlazos/hc-fixes -> origin/mlazos/hc-fixes 2025-08-14T21:24:06.1818734Z * [new branch] mlazos/hc-fixes3 -> origin/mlazos/hc-fixes3 2025-08-14T21:24:06.1818865Z * [new branch] mlazos/hc-fixes4 -> origin/mlazos/hc-fixes4 2025-08-14T21:24:06.1819003Z * [new branch] mlazos/hc-hf -> origin/mlazos/hc-hf 2025-08-14T21:24:06.1819130Z * [new branch] mlazos/hc-mut -> origin/mlazos/hc-mut 2025-08-14T21:24:06.1819265Z * [new branch] mlazos/hc10 -> origin/mlazos/hc10 2025-08-14T21:24:06.1819446Z * [new branch] mlazos/hc11 -> origin/mlazos/hc11 2025-08-14T21:24:06.1819567Z * [new branch] mlazos/hc12 -> origin/mlazos/hc12 2025-08-14T21:24:06.1819877Z * [new branch] mlazos/hc13 -> origin/mlazos/hc13 2025-08-14T21:24:06.1820075Z * [new branch] mlazos/hc14 -> origin/mlazos/hc14 2025-08-14T21:24:06.1820237Z * [new branch] mlazos/hc15 -> origin/mlazos/hc15 2025-08-14T21:24:06.1820431Z * [new branch] mlazos/hc2 -> origin/mlazos/hc2 2025-08-14T21:24:06.1820611Z * [new branch] mlazos/hc4 -> origin/mlazos/hc4 2025-08-14T21:24:06.1820806Z * [new branch] mlazos/hc5 -> origin/mlazos/hc5 2025-08-14T21:24:06.1820981Z * [new branch] mlazos/hc6 -> origin/mlazos/hc6 2025-08-14T21:24:06.1821181Z * [new branch] mlazos/hc7 -> origin/mlazos/hc7 2025-08-14T21:24:06.1821356Z * [new branch] mlazos/hc8 -> origin/mlazos/hc8 2025-08-14T21:24:06.1821516Z * [new branch] mlazos/hc9 -> origin/mlazos/hc9 2025-08-14T21:24:06.1821666Z * [new branch] mlazos/hc_baseline2 -> origin/mlazos/hc_baseline2 2025-08-14T21:24:06.1821794Z * [new branch] mlazos/hop-modes -> origin/mlazos/hop-modes 
2025-08-14T21:24:06.1821949Z * [new branch] mlazos/init-per-param -> origin/mlazos/init-per-param 2025-08-14T21:24:06.1822097Z * [new branch] mlazos/init_per_param -> origin/mlazos/init_per_param 2025-08-14T21:24:06.1822236Z * [new branch] mlazos/less-guards -> origin/mlazos/less-guards 2025-08-14T21:24:06.1822392Z * [new branch] mlazos/lr-composibility -> origin/mlazos/lr-composibility 2025-08-14T21:24:06.1822527Z * [new branch] mlazos/main -> origin/mlazos/main 2025-08-14T21:24:06.1822704Z * [new branch] mlazos/main-test-enablement -> origin/mlazos/main-test-enablement 2025-08-14T21:24:06.1822838Z * [new branch] mlazos/main2 -> origin/mlazos/main2 2025-08-14T21:24:06.1822954Z * [new branch] mlazos/mcg -> origin/mlazos/mcg 2025-08-14T21:24:06.1823072Z * [new branch] mlazos/mcg2 -> origin/mlazos/mcg2 2025-08-14T21:24:06.1823221Z * [new branch] mlazos/meta-guards -> origin/mlazos/meta-guards 2025-08-14T21:24:06.1823359Z * [new branch] mlazos/mlazos/ck2 -> origin/mlazos/mlazos/ck2 2025-08-14T21:24:06.1823559Z * [new branch] mlazos/mlazos/foreach-map-adam -> origin/mlazos/mlazos/foreach-map-adam 2025-08-14T21:24:06.1823730Z * [new branch] mlazos/mlazos/tf-mode-backup -> origin/mlazos/mlazos/tf-mode-backup 2025-08-14T21:24:06.1823864Z * [new branch] mlazos/mod-fix -> origin/mlazos/mod-fix 2025-08-14T21:24:06.1824069Z * [new branch] mlazos/mode-fix -> origin/mlazos/mode-fix 2025-08-14T21:24:06.1824211Z * [new branch] mlazos/more-tests -> origin/mlazos/more-tests 2025-08-14T21:24:06.1824349Z * [new branch] mlazos/nested-dc -> origin/mlazos/nested-dc 2025-08-14T21:24:06.1824488Z * [new branch] mlazos/no-cpp -> origin/mlazos/no-cpp 2025-08-14T21:24:06.1824674Z * [new branch] mlazos/no-init-group-handling -> origin/mlazos/no-init-group-handling 2025-08-14T21:24:06.1824814Z * [new branch] mlazos/offsets -> origin/mlazos/offsets 2025-08-14T21:24:06.1824967Z * [new branch] mlazos/opt-bench-exp2 -> origin/mlazos/opt-bench-exp2 2025-08-14T21:24:06.1825098Z * [new branch] mlazos/opt-incr -> origin/mlazos/opt-incr 2025-08-14T21:24:06.1825296Z * [new branch] mlazos/proxy-ctors -> origin/mlazos/proxy-ctors 2025-08-14T21:24:06.1825436Z * [new branch] mlazos/proxy-opt -> origin/mlazos/proxy-opt 2025-08-14T21:24:06.1825591Z * [new branch] mlazos/quant-fix -> origin/mlazos/quant-fix 2025-08-14T21:24:06.1825735Z * [new branch] mlazos/rm-buf-names -> origin/mlazos/rm-buf-names 2025-08-14T21:24:06.1825863Z * [new branch] mlazos/rm-spam -> origin/mlazos/rm-spam 2025-08-14T21:24:06.1825995Z * [new branch] mlazos/rtp -> origin/mlazos/rtp 2025-08-14T21:24:06.1826144Z * [new branch] mlazos/static-idx-dbg -> origin/mlazos/static-idx-dbg 2025-08-14T21:24:06.1826313Z * [new branch] mlazos/static-inputs-log -> origin/mlazos/static-inputs-log 2025-08-14T21:24:06.1826465Z * [new branch] mlazos/sub-param-fix -> origin/mlazos/sub-param-fix 2025-08-14T21:24:06.1826596Z * [new branch] mlazos/td-fix2 -> origin/mlazos/td-fix2 2025-08-14T21:24:06.1826763Z * [new branch] mlazos/tensor-hasattr2 -> origin/mlazos/tensor-hasattr2 2025-08-14T21:24:06.1831300Z * [new branch] mlazos/test -> origin/mlazos/test 2025-08-14T21:24:06.1831563Z * [new branch] mlazos/tf-mode -> origin/mlazos/tf-mode 2025-08-14T21:24:06.1831831Z * [new branch] mlazos/tf-mode-backup2 -> origin/mlazos/tf-mode-backup2 2025-08-14T21:24:06.1832034Z * [new branch] mlazos/tf-mode-reland -> origin/mlazos/tf-mode-reland 2025-08-14T21:24:06.1832233Z * [new branch] mlazos/tf-mode-reland2 -> origin/mlazos/tf-mode-reland2 2025-08-14T21:24:06.1832492Z * [new branch] 
mlazos/tf-mode-reland3 -> origin/mlazos/tf-mode-reland3 2025-08-14T21:24:06.1832646Z * [new branch] mlazos/topo-fix -> origin/mlazos/topo-fix 2025-08-14T21:24:06.1832953Z * [new branch] mlazos/triton-no-epi -> origin/mlazos/triton-no-epi 2025-08-14T21:24:06.1833110Z * [new branch] mlazos/tune-proto -> origin/mlazos/tune-proto 2025-08-14T21:24:06.1833264Z * [new branch] mlazos/tuple-fixes -> origin/mlazos/tuple-fixes 2025-08-14T21:24:06.1833414Z * [new branch] mlazos/tuple-fixes2 -> origin/mlazos/tuple-fixes2 2025-08-14T21:24:06.1833572Z * [new branch] mlazos/tuple-handling -> origin/mlazos/tuple-handling 2025-08-14T21:24:06.1833717Z * [new branch] mlazos/user-streams -> origin/mlazos/user-streams 2025-08-14T21:24:06.1838715Z * [new branch] mlazos/vary-beta -> origin/mlazos/vary-beta 2025-08-14T21:24:06.1839057Z * [new branch] mlazos/vary-beta2 -> origin/mlazos/vary-beta2 2025-08-14T21:24:06.1839219Z * [new branch] mlazos/weird-perf1 -> origin/mlazos/weird-perf1 2025-08-14T21:24:06.1839530Z * [new branch] mm_out_dtype_compile -> origin/mm_out_dtype_compile 2025-08-14T21:24:06.1839830Z * [new branch] modify-setupvllm -> origin/modify-setupvllm 2025-08-14T21:24:06.1840023Z * [new branch] move-theme-out-docker -> origin/move-theme-out-docker 2025-08-14T21:24:06.1844762Z * [new branch] mps-linear-1d -> origin/mps-linear-1d 2025-08-14T21:24:06.1849015Z * [new branch] msaroufim/be1 -> origin/msaroufim/be1 2025-08-14T21:24:06.1854052Z * [new branch] msaroufim/cn_path -> origin/msaroufim/cn_path 2025-08-14T21:24:06.1854291Z * [new branch] msaroufim/dtensorfusedadam -> origin/msaroufim/dtensorfusedadam 2025-08-14T21:24:06.1854452Z * [new branch] msaroufim/reduce -> origin/msaroufim/reduce 2025-08-14T21:24:06.1854601Z * [new branch] mtia/basic-cmake -> origin/mtia/basic-cmake 2025-08-14T21:24:06.1854999Z * [new branch] muon_dev -> origin/muon_dev 2025-08-14T21:24:06.1855180Z * [new branch] new-modifiy-setupvllm -> origin/new-modifiy-setupvllm 2025-08-14T21:24:06.1855320Z * [new branch] new-setupvllm -> origin/new-setupvllm 2025-08-14T21:24:06.1855465Z * [new branch] newtest-base -> origin/newtest-base 2025-08-14T21:24:06.1855608Z * [new branch] ngimel/cat_perf -> origin/ngimel/cat_perf 2025-08-14T21:24:06.1855774Z * [new branch] ngimel/cudamoduleload -> origin/ngimel/cudamoduleload 2025-08-14T21:24:06.1855964Z * [new branch] ngimel/fabric_driver_version -> origin/ngimel/fabric_driver_version 2025-08-14T21:24:06.1856112Z * [new branch] ngimel/fabric_symm -> origin/ngimel/fabric_symm 2025-08-14T21:24:06.1856251Z * [new branch] ngimel/gg_new -> origin/ngimel/gg_new 2025-08-14T21:24:06.1856424Z * [new branch] ngimel/grouped_mm_checks -> origin/ngimel/grouped_mm_checks 2025-08-14T21:24:06.1856573Z * [new branch] ngimel/guardfabric -> origin/ngimel/guardfabric 2025-08-14T21:24:06.1856725Z * [new branch] ngimel/index_None -> origin/ngimel/index_None 2025-08-14T21:24:06.1856867Z * [new branch] ngimel/modeguard -> origin/ngimel/modeguard 2025-08-14T21:24:06.1857042Z * [new branch] ngimel/multicast_fix -> origin/ngimel/multicast_fix 2025-08-14T21:24:06.1857197Z * [new branch] ngimel/unbind_multimem -> origin/ngimel/unbind_multimem 2025-08-14T21:24:06.1857325Z * [new branch] nightly -> origin/nightly 2025-08-14T21:24:06.1857498Z * [new branch] nmacchioni-patch-10 -> origin/nmacchioni-patch-10 2025-08-14T21:24:06.1857654Z * [new branch] nmacchioni-patch-7 -> origin/nmacchioni-patch-7 2025-08-14T21:24:06.1857820Z * [new branch] nmacchioni-patch-8 -> origin/nmacchioni-patch-8 2025-08-14T21:24:06.1857966Z * [new 
branch] nmacchioni-patch-9 -> origin/nmacchioni-patch-9 2025-08-14T21:24:06.1858310Z * [new branch] nullplay_fuse_matmul -> origin/nullplay_fuse_matmul 2025-08-14T21:24:06.1858595Z * [new branch] nweidia/enable-B200-inductor-nightly-ci -> origin/nweidia/enable-B200-inductor-nightly-ci 2025-08-14T21:24:06.1859101Z * [new branch] one-off -> origin/one-off 2025-08-14T21:24:06.1866715Z * [new branch] orig/release/1.10 -> origin/orig/release/1.10 2025-08-14T21:24:06.1867062Z * [new branch] orig/release/1.11 -> origin/orig/release/1.11 2025-08-14T21:24:06.1867232Z * [new branch] orig/release/1.12 -> origin/orig/release/1.12 2025-08-14T21:24:06.1867382Z * [new branch] orig/release/1.13 -> origin/orig/release/1.13 2025-08-14T21:24:06.1867750Z * [new branch] orig/release/1.6 -> origin/orig/release/1.6 2025-08-14T21:24:06.1868048Z * [new branch] orig/release/1.7 -> origin/orig/release/1.7 2025-08-14T21:24:06.1868937Z * [new branch] orig/release/1.8 -> origin/orig/release/1.8 2025-08-14T21:24:06.1869104Z * [new branch] orig/release/1.9 -> origin/orig/release/1.9 2025-08-14T21:24:06.1869355Z * [new branch] orig/release/2.0 -> origin/orig/release/2.0 2025-08-14T21:24:06.1869517Z * [new branch] orig/release/2.1 -> origin/orig/release/2.1 2025-08-14T21:24:06.1869761Z * [new branch] orig/release/2.2 -> origin/orig/release/2.2 2025-08-14T21:24:06.1869931Z * [new branch] orig/release/2.3 -> origin/orig/release/2.3 2025-08-14T21:24:06.1870163Z * [new branch] orig/release/2.4 -> origin/orig/release/2.4 2025-08-14T21:24:06.1876116Z * [new branch] orig/release/2.5 -> origin/orig/release/2.5 2025-08-14T21:24:06.1876454Z * [new branch] orig/release/2.6 -> origin/orig/release/2.6 2025-08-14T21:24:06.1876612Z * [new branch] orig/release/2.7 -> origin/orig/release/2.7 2025-08-14T21:24:06.1876856Z * [new branch] orig/release/2.8 -> origin/orig/release/2.8 2025-08-14T21:24:06.1877033Z * [new branch] oulgen/fx_graph -> origin/oulgen/fx_graph 2025-08-14T21:24:06.1877195Z * [new branch] padded-tensor -> origin/padded-tensor 2025-08-14T21:24:06.1878082Z * [new branch] parallel_cat -> origin/parallel_cat 2025-08-14T21:24:06.1878233Z * [new branch] pca2 -> origin/pca2 2025-08-14T21:24:06.1878407Z * [new branch] pianpwk-patch-1 -> origin/pianpwk-patch-1 2025-08-14T21:24:06.1878657Z * [new branch] pianpwk/backed_size_oblivious_export -> origin/pianpwk/backed_size_oblivious_export 2025-08-14T21:24:06.1878836Z * [new branch] pianpwk/dde_repeat_cat -> origin/pianpwk/dde_repeat_cat 2025-08-14T21:24:06.1884472Z * [new branch] pianpwk/draft_export_normalize -> origin/pianpwk/draft_export_normalize 2025-08-14T21:24:06.1884689Z * [new branch] pianpwk/dynamic_source_dim -> origin/pianpwk/dynamic_source_dim 2025-08-14T21:24:06.1884886Z * [new branch] pianpwk/invalidate_fake_memo -> origin/pianpwk/invalidate_fake_memo 2025-08-14T21:24:06.1885065Z * [new branch] pianpwk/lru_cache_bound_sympy -> origin/pianpwk/lru_cache_bound_sympy 2025-08-14T21:24:06.1885231Z * [new branch] pianpwk/max_1_strides -> origin/pianpwk/max_1_strides 2025-08-14T21:24:06.1885387Z * [new branch] pianpwk/nonzero_memo -> origin/pianpwk/nonzero_memo 2025-08-14T21:24:06.1890730Z * [new branch] pianpwk/oblivious_reshape_view_better -> origin/pianpwk/oblivious_reshape_view_better 2025-08-14T21:24:06.1891119Z * [new branch] pianpwk/oblivious_should_swap -> origin/pianpwk/oblivious_should_swap 2025-08-14T21:24:06.1891398Z * [new branch] pianpwk/oblivious_slice_forward -> origin/pianpwk/oblivious_slice_forward 2025-08-14T21:24:06.1891681Z * [new branch] 
pianpwk/oblivious_where -> origin/pianpwk/oblivious_where 2025-08-14T21:24:06.1892392Z * [new branch] pianpwk/param_static_pgo -> origin/pianpwk/param_static_pgo 2025-08-14T21:24:06.1892620Z * [new branch] pianpwk/pre_forward_hook -> origin/pianpwk/pre_forward_hook 2025-08-14T21:24:06.1892830Z * [new branch] pianpwk/remove_guard_fail_break -> origin/pianpwk/remove_guard_fail_break 2025-08-14T21:24:06.1893020Z * [new branch] pianpwk/slice_fresh_symbols -> origin/pianpwk/slice_fresh_symbols 2025-08-14T21:24:06.1893370Z * [new branch] pianpwk/sym_sym -> origin/pianpwk/sym_sym 2025-08-14T21:24:06.1893568Z * [new branch] pianpwk/test_slice_fake_impl -> origin/pianpwk/test_slice_fake_impl 2025-08-14T21:24:06.1893776Z * [new branch] pianpwk/unbacked_channels_last -> origin/pianpwk/unbacked_channels_last 2025-08-14T21:24:06.1893958Z * [new branch] pianpwk/unbacked_safe_conv1d -> origin/pianpwk/unbacked_safe_conv1d 2025-08-14T21:24:06.1894131Z * [new branch] pianpwk/unbacked_sdpa_flash -> origin/pianpwk/unbacked_sdpa_flash 2025-08-14T21:24:06.1894312Z * [new branch] pianpwk/unbacked_should_swap -> origin/pianpwk/unbacked_should_swap 2025-08-14T21:24:06.1894991Z * [new branch] pianpwk/unbacked_should_swap_2 -> origin/pianpwk/unbacked_should_swap_2 2025-08-14T21:24:06.1895192Z * [new branch] pianpwk/unbacked_slice_binding -> origin/pianpwk/unbacked_slice_binding 2025-08-14T21:24:06.1895438Z * [new branch] pianpwk/unbacked_slice_forward -> origin/pianpwk/unbacked_slice_forward 2025-08-14T21:24:06.1895620Z * [new branch] pianpwk/verbose_tensor_guards -> origin/pianpwk/verbose_tensor_guards 2025-08-14T21:24:06.1895786Z * [new branch] pianpwk/wan21_reshape -> origin/pianpwk/wan21_reshape 2025-08-14T21:24:06.1895980Z * [new branch] pianpwk/whitelist_optimizer -> origin/pianpwk/whitelist_optimizer 2025-08-14T21:24:06.1896643Z * [new branch] pin-torchao -> origin/pin-torchao 2025-08-14T21:24:06.1898010Z * [new branch] piz/fall_back_missing_0705 -> origin/piz/fall_back_missing_0705 2025-08-14T21:24:06.1898307Z * [new branch] piz/fall_back_missing_0716 -> origin/piz/fall_back_missing_0716 2025-08-14T21:24:06.1899341Z * [new branch] piz/fill_dist_cost_0702-3 -> origin/piz/fill_dist_cost_0702-3 2025-08-14T21:24:06.1899626Z * [new branch] piz/fill_dist_cost_0702-4 -> origin/piz/fill_dist_cost_0702-4 2025-08-14T21:24:06.1901108Z * [new branch] piz/fill_dist_cost_0702-5 -> origin/piz/fill_dist_cost_0702-5 2025-08-14T21:24:06.1901395Z * [new branch] piz/fix_sort_ -> origin/piz/fix_sort_ 2025-08-14T21:24:06.1903455Z * [new branch] piz/improve_scatter_0808 -> origin/piz/improve_scatter_0808 2025-08-14T21:24:06.1903806Z * [new branch] pool-separate -> origin/pool-separate 2025-08-14T21:24:06.1903986Z * [new branch] pr-156087 -> origin/pr-156087 2025-08-14T21:24:06.1904918Z * [new branch] pr/131860 -> origin/pr/131860 2025-08-14T21:24:06.1905450Z * [new branch] predispatch_to -> origin/predispatch_to 2025-08-14T21:24:06.1906783Z * [new branch] pt-opt-cuda3 -> origin/pt-opt-cuda3 2025-08-14T21:24:06.1907109Z * [new branch] pt2e-cache-model-device -> origin/pt2e-cache-model-device 2025-08-14T21:24:06.1907667Z * [new branch] pull-latest-theme -> origin/pull-latest-theme 2025-08-14T21:24:06.1909427Z * [new branch] pyobjectslot -> origin/pyobjectslot 2025-08-14T21:24:06.1909657Z * [new branch] python_compiled_autograd -> origin/python_compiled_autograd 2025-08-14T21:24:06.1911865Z * [new branch] qchip/export-D54134695 -> origin/qchip/export-D54134695 2025-08-14T21:24:06.1912186Z * [new branch] quint-bits -> 
origin/quint-bits 2025-08-14T21:24:06.1913816Z * [new branch] release/1.10 -> origin/release/1.10 2025-08-14T21:24:06.1919257Z * [new branch] release/1.11 -> origin/release/1.11 2025-08-14T21:24:06.1919520Z * [new branch] release/1.12 -> origin/release/1.12 2025-08-14T21:24:06.1919739Z * [new branch] release/1.13 -> origin/release/1.13 2025-08-14T21:24:06.1920049Z * [new branch] release/1.4 -> origin/release/1.4 2025-08-14T21:24:06.1920276Z * [new branch] release/1.4.1 -> origin/release/1.4.1 2025-08-14T21:24:06.1920533Z * [new branch] release/1.5 -> origin/release/1.5 2025-08-14T21:24:06.1920663Z * [new branch] release/1.6 -> origin/release/1.6 2025-08-14T21:24:06.1920849Z * [new branch] release/1.7 -> origin/release/1.7 2025-08-14T21:24:06.1927343Z * [new branch] release/1.8 -> origin/release/1.8 2025-08-14T21:24:06.1931636Z * [new branch] release/1.9 -> origin/release/1.9 2025-08-14T21:24:06.1933734Z * [new branch] release/2.0 -> origin/release/2.0 2025-08-14T21:24:06.1933911Z * [new branch] release/2.1 -> origin/release/2.1 2025-08-14T21:24:06.1934213Z * [new branch] release/2.2 -> origin/release/2.2 2025-08-14T21:24:06.1934353Z * [new branch] release/2.3 -> origin/release/2.3 2025-08-14T21:24:06.1934483Z * [new branch] release/2.4 -> origin/release/2.4 2025-08-14T21:24:06.1934603Z * [new branch] release/2.5 -> origin/release/2.5 2025-08-14T21:24:06.1934737Z * [new branch] release/2.6 -> origin/release/2.6 2025-08-14T21:24:06.1934863Z * [new branch] release/2.7 -> origin/release/2.7 2025-08-14T21:24:06.1934994Z * [new branch] release/2.8 -> origin/release/2.8 2025-08-14T21:24:06.1935145Z * [new branch] release_notes -> origin/release_notes 2025-08-14T21:24:06.1935336Z * [new branch] remove-actionable-label -> origin/remove-actionable-label 2025-08-14T21:24:06.1935475Z * [new branch] remove-ao -> origin/remove-ao 2025-08-14T21:24:06.1935723Z * [new branch] replace-pytorch-labs-20250812-195836 -> origin/replace-pytorch-labs-20250812-195836 2025-08-14T21:24:06.1935956Z * [new branch] replace-pytorch-labs-20250812-200248 -> origin/replace-pytorch-labs-20250812-200248 2025-08-14T21:24:06.1936174Z * [new branch] replace-pytorch-labs-20250812-200324 -> origin/replace-pytorch-labs-20250812-200324 2025-08-14T21:24:06.1936391Z * [new branch] replace-pytorch-labs-20250812-204020 -> origin/replace-pytorch-labs-20250812-204020 2025-08-14T21:24:06.1936615Z * [new branch] replace-pytorch-labs-20250812-204125 -> origin/replace-pytorch-labs-20250812-204125 2025-08-14T21:24:06.1936834Z * [new branch] replace-pytorch-labs-20250812-205624 -> origin/replace-pytorch-labs-20250812-205624 2025-08-14T21:24:06.1937105Z * [new branch] revert-131069-gh/krzysztofjordan/1/head -> origin/revert-131069-gh/krzysztofjordan/1/head 2025-08-14T21:24:06.1939316Z * [new branch] revert-131469-gh/andrewor14/51/head -> origin/revert-131469-gh/andrewor14/51/head 2025-08-14T21:24:06.1940265Z * [new branch] revert-156870-gh/skarjala/3/head -> origin/revert-156870-gh/skarjala/3/head 2025-08-14T21:24:06.1943698Z * [new branch] revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ -> origin/revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ 2025-08-14T21:24:06.1943864Z * [new branch] revert-direct-updates -> origin/revert-direct-updates 2025-08-14T21:24:06.1944014Z * [new branch] rocm-monitoring -> origin/rocm-monitoring 2025-08-14T21:24:06.1947861Z * [new branch] ryanguo99/cleanup-dynamo-expected-failures -> origin/ryanguo99/cleanup-dynamo-expected-failures 2025-08-14T21:24:06.1948105Z * [new branch] ryanguo99/fix-closure-var -> 
origin/ryanguo99/fix-closure-var 2025-08-14T21:24:06.1948273Z * [new branch] rzou/faketensor_bench -> origin/rzou/faketensor_bench 2025-08-14T21:24:06.1948602Z * [new branch] rzou/njt -> origin/rzou/njt 2025-08-14T21:24:06.1948811Z * [new branch] rzou/operator -> origin/rzou/operator 2025-08-14T21:24:06.1949063Z * [new branch] rzou/pca -> origin/rzou/pca 2025-08-14T21:24:06.1949214Z * [new branch] rzou/pipe_split -> origin/rzou/pipe_split 2025-08-14T21:24:06.1953047Z * [new branch] rzou/realprop -> origin/rzou/realprop 2025-08-14T21:24:06.1953246Z * [new branch] rzou/setup_context -> origin/rzou/setup_context 2025-08-14T21:24:06.1953470Z * [new branch] sanchitintel/refactor_aten_int8_woq_gemm -> origin/sanchitintel/refactor_aten_int8_woq_gemm 2025-08-14T21:24:06.1953829Z * [new branch] sanchitintel/weird_thing_with_test_cpu_select_algorithm -> origin/sanchitintel/weird_thing_with_test_cpu_select_algorithm 2025-08-14T21:24:06.1954137Z * [new branch] sapling-pr-archive-SS-JIA -> origin/sapling-pr-archive-SS-JIA 2025-08-14T21:24:06.1957318Z * [new branch] save -> origin/save 2025-08-14T21:24:06.1957445Z * [new branch] sdym/2.5.1 -> origin/sdym/2.5.1 2025-08-14T21:24:06.1957591Z * [new branch] seemethere-patch-1 -> origin/seemethere-patch-1 2025-08-14T21:24:06.1957722Z * [new branch] setup-torchci -> origin/setup-torchci 2025-08-14T21:24:06.1957838Z * [new branch] setupvllm -> origin/setupvllm 2025-08-14T21:24:06.1962354Z * [new branch] share_and_pin_fork -> origin/share_and_pin_fork 2025-08-14T21:24:06.1962569Z * [new branch] shengf/fx-xform-perf -> origin/shengf/fx-xform-perf 2025-08-14T21:24:06.1962733Z * [new branch] shikaili_fp8_allgather -> origin/shikaili_fp8_allgather 2025-08-14T21:24:06.1962964Z * [new branch] shoumikhin-patch-12 -> origin/shoumikhin-patch-12 2025-08-14T21:24:06.1963144Z * [new branch] simplify-fq-per-channel -> origin/simplify-fq-per-channel 2025-08-14T21:24:06.1963295Z * [new branch] solve-accuracy-fix -> origin/solve-accuracy-fix 2025-08-14T21:24:06.1963444Z * [new branch] sqzhang/flight4 -> origin/sqzhang/flight4 2025-08-14T21:24:06.1963598Z * [new branch] sqzhang/flight4plus -> origin/sqzhang/flight4plus 2025-08-14T21:24:06.1963767Z * [new branch] sraikund/record_funct_test -> origin/sraikund/record_funct_test 2025-08-14T21:24:06.1964231Z * [new branch] sraikund16/test -> origin/sraikund16/test 2025-08-14T21:24:06.1964864Z * [new branch] stablize-compilation-time -> origin/stablize-compilation-time 2025-08-14T21:24:06.1968317Z * [new branch] standalone-templates -> origin/standalone-templates 2025-08-14T21:24:06.1968646Z * [new branch] standalone_package_weights -> origin/standalone_package_weights 2025-08-14T21:24:06.1968896Z * [new branch] starterTaskUpdate -> origin/starterTaskUpdate 2025-08-14T21:24:06.1969107Z * [new branch] step2vllmsetup -> origin/step2vllmsetup 2025-08-14T21:24:06.1969316Z * [new branch] subgraph_fuse -> origin/subgraph_fuse 2025-08-14T21:24:06.1969972Z * [new branch] support-uv-in-collect_env -> origin/support-uv-in-collect_env 2025-08-14T21:24:06.1971896Z * [new branch] suryasub/fix-nccl-hang -> origin/suryasub/fix-nccl-hang 2025-08-14T21:24:06.1972217Z * [new branch] sve-poc -> origin/sve-poc 2025-08-14T21:24:06.1972431Z * [new branch] svekars-patch-1 -> origin/svekars-patch-1 2025-08-14T21:24:06.1972666Z * [new branch] svekars-patch-2 -> origin/svekars-patch-2 2025-08-14T21:24:06.1973066Z * [new branch] switch-bn -> origin/switch-bn 2025-08-14T21:24:06.1976181Z * [new branch] sympy-bottleneck-repro -> origin/sympy-bottleneck-repro 
2025-08-14T21:24:06.1976439Z * [new branch] tenpercent/ck_inductor_gfx950 -> origin/tenpercent/ck_inductor_gfx950 2025-08-14T21:24:06.1976603Z * [new branch] tensordict_integration -> origin/tensordict_integration 2025-08-14T21:24:06.1976818Z * [new branch] test-half-migration-internally -> origin/test-half-migration-internally 2025-08-14T21:24:06.1976967Z * [new branch] test-internal-et -> origin/test-internal-et 2025-08-14T21:24:06.1977136Z * [new branch] test-move-conda-builds -> origin/test-move-conda-builds 2025-08-14T21:24:06.1977344Z * [new branch] test-myst-markdown-docstring -> origin/test-myst-markdown-docstring 2025-08-14T21:24:06.1977654Z * [new branch] test-old -> origin/test-old 2025-08-14T21:24:06.1978047Z * [new branch] test-vec-migration-internally -> origin/test-vec-migration-internally 2025-08-14T21:24:06.1979327Z * [new branch] test/bmm_heur -> origin/test/bmm_heur 2025-08-14T21:24:06.1979879Z * [new branch] test/inductor -> origin/test/inductor 2025-08-14T21:24:06.1983546Z * [new branch] tidy_performance_cyy -> origin/tidy_performance_cyy 2025-08-14T21:24:06.1983679Z * [new branch] torchtitan_ep -> origin/torchtitan_ep 2025-08-14T21:24:06.1983924Z * [new branch] trace_fsdp_torchtune_lora -> origin/trace_fsdp_torchtune_lora 2025-08-14T21:24:06.1989287Z * [new branch] traceable_fsdp_unit_tests -> origin/traceable_fsdp_unit_tests 2025-08-14T21:24:06.1993425Z * [new branch] trackMonitor -> origin/trackMonitor 2025-08-14T21:24:06.1997335Z * [new branch] tree_loop_vec_base -> origin/tree_loop_vec_base 2025-08-14T21:24:06.2000582Z * [new branch] tree_vec_base -> origin/tree_vec_base 2025-08-14T21:24:06.2003960Z * [new branch] triton-update -> origin/triton-update 2025-08-14T21:24:06.2008925Z * [new branch] triton_kernel -> origin/triton_kernel 2025-08-14T21:24:06.2009224Z * [new branch] triton_kernel_perf -> origin/triton_kernel_perf 2025-08-14T21:24:06.2009451Z * [new branch] try-runllm -> origin/try-runllm 2025-08-14T21:24:06.2009600Z * [new branch] type_dec -> origin/type_dec 2025-08-14T21:24:06.2009872Z * [new branch] udate-sphinx-dependancies -> origin/udate-sphinx-dependancies 2025-08-14T21:24:06.2010155Z * [new branch] update-audio-commit-hash/16307312222-1661-1 -> origin/update-audio-commit-hash/16307312222-1661-1 2025-08-14T21:24:06.2010898Z * [new branch] update-audio-commit-hash/16431348808-1673-1 -> origin/update-audio-commit-hash/16431348808-1673-1 2025-08-14T21:24:06.2013723Z * [new branch] update-audio-commit-hash/16510774365-1683-1 -> origin/update-audio-commit-hash/16510774365-1683-1 2025-08-14T21:24:06.2014145Z * [new branch] update-audio-commit-hash/16583472358-1693-1 -> origin/update-audio-commit-hash/16583472358-1693-1 2025-08-14T21:24:06.2014514Z * [new branch] update-audio-commit-hash/16663082088-1700-1 -> origin/update-audio-commit-hash/16663082088-1700-1 2025-08-14T21:24:06.2015228Z * [new branch] update-audio-commit-hash/16737365217-1704-1 -> origin/update-audio-commit-hash/16737365217-1704-1 2025-08-14T21:24:06.2019988Z * [new branch] update-audio-commit-hash/16791960928-1711-1 -> origin/update-audio-commit-hash/16791960928-1711-1 2025-08-14T21:24:06.2024703Z * [new branch] update-audio-commit-hash/16818882925-1712-1 -> origin/update-audio-commit-hash/16818882925-1712-1 2025-08-14T21:24:06.2025228Z * [new branch] update-audio-commit-hash/16895560422-1720-1 -> origin/update-audio-commit-hash/16895560422-1720-1 2025-08-14T21:24:06.2025481Z * [new branch] update-audio-commit-hash/16924174496-1738-1 -> origin/update-audio-commit-hash/16924174496-1738-1 
2025-08-14T21:24:06.2025835Z * [new branch] update-dynamic-shapes-doc -> origin/update-dynamic-shapes-doc 2025-08-14T21:24:06.2026134Z * [new branch] update-executorch-commit-hash/15694981040-1626-1 -> origin/update-executorch-commit-hash/15694981040-1626-1 2025-08-14T21:24:06.2026413Z * [new branch] update-triton-commit-hash/13663274526-1487-2 -> origin/update-triton-commit-hash/13663274526-1487-2 2025-08-14T21:24:06.2026666Z * [new branch] update-vision-commit-hash/15336342773-1607-1 -> origin/update-vision-commit-hash/15336342773-1607-1 2025-08-14T21:24:06.2026973Z * [new branch] update-vllm-commit-hash/16431348808-1673-1 -> origin/update-vllm-commit-hash/16431348808-1673-1 2025-08-14T21:24:06.2027206Z * [new branch] update-vllm-commit-hash/16484773233-1682-1 -> origin/update-vllm-commit-hash/16484773233-1682-1 2025-08-14T21:24:06.2027425Z * [new branch] update-vllm-commit-hash/16510774365-1683-1 -> origin/update-vllm-commit-hash/16510774365-1683-1 2025-08-14T21:24:06.2027680Z * [new branch] update-vllm-commit-hash/16534031105-1684-1 -> origin/update-vllm-commit-hash/16534031105-1684-1 2025-08-14T21:24:06.2027910Z * [new branch] update-vllm-commit-hash/16545403308-1687-1 -> origin/update-vllm-commit-hash/16545403308-1687-1 2025-08-14T21:24:06.2028136Z * [new branch] update-vllm-commit-hash/16557202787-1688-1 -> origin/update-vllm-commit-hash/16557202787-1688-1 2025-08-14T21:24:06.2028367Z * [new branch] update-vllm-commit-hash/16583472358-1693-1 -> origin/update-vllm-commit-hash/16583472358-1693-1 2025-08-14T21:24:06.2028600Z * [new branch] update-vllm-commit-hash/16663082088-1700-1 -> origin/update-vllm-commit-hash/16663082088-1700-1 2025-08-14T21:24:06.2028832Z * [new branch] update-vllm-commit-hash/16737365217-1704-1 -> origin/update-vllm-commit-hash/16737365217-1704-1 2025-08-14T21:24:06.2029050Z * [new branch] update-vllm-commit-hash/16843157111-1713-1 -> origin/update-vllm-commit-hash/16843157111-1713-1 2025-08-14T21:24:06.2029267Z * [new branch] update-vllm-commit-hash/16855312394-1714-1 -> origin/update-vllm-commit-hash/16855312394-1714-1 2025-08-14T21:24:06.2029475Z * [new branch] update-vllm-commit-hash/16924174496-1738-1 -> origin/update-vllm-commit-hash/16924174496-1738-1 2025-08-14T21:24:06.2029690Z * [new branch] update-vllm-commit-hash/16952608705-1745-1 -> origin/update-vllm-commit-hash/16952608705-1745-1 2025-08-14T21:24:06.2029899Z * [new branch] update-xla-commit-hash/16260974441-194-1 -> origin/update-xla-commit-hash/16260974441-194-1 2025-08-14T21:24:06.2030109Z * [new branch] update-xla-commit-hash/16717126778-197-1 -> origin/update-xla-commit-hash/16717126778-197-1 2025-08-14T21:24:06.2030318Z * [new branch] update-xla-commit-hash/16873912760-198-1 -> origin/update-xla-commit-hash/16873912760-198-1 2025-08-14T21:24:06.2030527Z * [new branch] update_docs_torch_multinomial_issue#125388 -> origin/update_docs_torch_multinomial_issue#125388 2025-08-14T21:24:06.2030686Z * [new branch] update_executorch_pin -> origin/update_executorch_pin 2025-08-14T21:24:06.2030841Z * [new branch] update_slow_tests_1722488736 -> origin/update_slow_tests_1722488736 2025-08-14T21:24:06.2030991Z * [new branch] update_slow_tests_1722879173 -> origin/update_slow_tests_1722879173 2025-08-14T21:24:06.2031149Z * [new branch] update_slow_tests_1752478971 -> origin/update_slow_tests_1752478971 2025-08-14T21:24:06.2031378Z * [new branch] update_submodule_FBGEMM -> origin/update_submodule_FBGEMM 2025-08-14T21:24:06.2031533Z * [new branch] update_submodule_kineto -> origin/update_submodule_kineto 
2025-08-14T21:24:06.2031699Z * [new branch] update_submodule_tensorpipe -> origin/update_submodule_tensorpipe 2025-08-14T21:24:06.2031815Z * [new branch] v0.1.2 -> origin/v0.1.2 2025-08-14T21:24:06.2031943Z * [new branch] v1.0.1 -> origin/v1.0.1 2025-08-14T21:24:06.2032048Z * [new branch] v1.0.3 -> origin/v1.0.3 2025-08-14T21:24:06.2032159Z * [new branch] v1.1.0 -> origin/v1.1.0 2025-08-14T21:24:06.2032263Z * [new branch] v1.2.0 -> origin/v1.2.0 2025-08-14T21:24:06.2032364Z * [new branch] v1.3.0 -> origin/v1.3.0 2025-08-14T21:24:06.2032519Z * [new branch] v1.3.1 -> origin/v1.3.1 2025-08-14T21:24:06.2032646Z * [new branch] validate_fn -> origin/validate_fn 2025-08-14T21:24:06.2032779Z * [new branch] validations_2.6 -> origin/validations_2.6 2025-08-14T21:24:06.2032921Z * [new branch] validations_2.8 -> origin/validations_2.8 2025-08-14T21:24:06.2037512Z * [new branch] viable/strict -> origin/viable/strict 2025-08-14T21:24:06.2037853Z * [new branch] vllmbuildci -> origin/vllmbuildci 2025-08-14T21:24:06.2037976Z * [new branch] vllmpin -> origin/vllmpin 2025-08-14T21:24:06.2038251Z * [new branch] vllmpintest -> origin/vllmpintest 2025-08-14T21:24:06.2038406Z * [new branch] wdvr-patch-1 -> origin/wdvr-patch-1 2025-08-14T21:24:06.2038612Z * [new branch] wdvr-patch-2 -> origin/wdvr-patch-2 2025-08-14T21:24:06.2038955Z * [new branch] wdvr/conda_devcontainer -> origin/wdvr/conda_devcontainer 2025-08-14T21:24:06.2039106Z * [new branch] wdvr/fix_logging_test -> origin/wdvr/fix_logging_test 2025-08-14T21:24:06.2039245Z * [new branch] wdvr/iss_145259 -> origin/wdvr/iss_145259 2025-08-14T21:24:06.2039694Z * [new branch] weight_sharing_cpp -> origin/weight_sharing_cpp 2025-08-14T21:24:06.2039853Z * [new branch] whc/flight -> origin/whc/flight 2025-08-14T21:24:06.2039981Z * [new branch] whc/flight4 -> origin/whc/flight4 2025-08-14T21:24:06.2040120Z * [new branch] whc/flight51 -> origin/whc/flight51 2025-08-14T21:24:06.2040248Z * [new branch] whc/flight53 -> origin/whc/flight53 2025-08-14T21:24:06.2040382Z * [new branch] whc/p2phang -> origin/whc/p2phang 2025-08-14T21:24:06.2045500Z * [new branch] whc/stage2 -> origin/whc/stage2 2025-08-14T21:24:06.2047864Z * [new branch] whc/uneven -> origin/whc/uneven 2025-08-14T21:24:06.2052920Z * [new branch] whc/uneven-merge -> origin/whc/uneven-merge 2025-08-14T21:24:06.2054993Z * [new branch] win_warnings -> origin/win_warnings 2025-08-14T21:24:06.2055183Z * [new branch] workonoldcommit -> origin/workonoldcommit 2025-08-14T21:24:06.2055397Z * [new branch] wwen/programming-model-2.8 -> origin/wwen/programming-model-2.8 2025-08-14T21:24:06.2055525Z * [new branch] xmfan/ca_0516 -> origin/xmfan/ca_0516 2025-08-14T21:24:06.2055667Z * [new branch] xmfan/ca_1051b93192 -> origin/xmfan/ca_1051b93192 2025-08-14T21:24:06.2055923Z * [new branch] xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 -> origin/xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 2025-08-14T21:24:06.2056288Z * [new branch] xmfan/ca_5a2be192d1 -> origin/xmfan/ca_5a2be192d1 2025-08-14T21:24:06.2056438Z * [new branch] xmfan/ca_9d59b516e9 -> origin/xmfan/ca_9d59b516e9 2025-08-14T21:24:06.2056565Z * [new branch] xmfan/ca_api -> origin/xmfan/ca_api 2025-08-14T21:24:06.2056685Z * [new branch] xmfan/ca_apr8 -> origin/xmfan/ca_apr8 2025-08-14T21:24:06.2056810Z * [new branch] xmfan/ca_base -> origin/xmfan/ca_base 2025-08-14T21:24:06.2056945Z * [new branch] xmfan/ca_cudagraphs -> origin/xmfan/ca_cudagraphs 2025-08-14T21:24:06.2057081Z * [new branch] xmfan/ca_dynamic -> origin/xmfan/ca_dynamic 
2025-08-14T21:24:06.2057204Z * [new branch] xmfan/ca_fix_dyn -> origin/xmfan/ca_fix_dyn 2025-08-14T21:24:06.2057342Z * [new branch] xmfan/ca_fix_lowering -> origin/xmfan/ca_fix_lowering 2025-08-14T21:24:06.2057559Z * [new branch] xmfan/ca_fix_polyfills -> origin/xmfan/ca_fix_polyfills 2025-08-14T21:24:06.2057682Z * [new branch] xmfan/ca_jan3 -> origin/xmfan/ca_jan3 2025-08-14T21:24:06.2057816Z * [new branch] xmfan/ca_jun18 -> origin/xmfan/ca_jun18 2025-08-14T21:24:06.2057939Z * [new branch] xmfan/ca_jun24 -> origin/xmfan/ca_jun24 2025-08-14T21:24:06.2059054Z * [new branch] xmfan/ca_mem_base -> origin/xmfan/ca_mem_base 2025-08-14T21:24:06.2059681Z * [new branch] xmfan/ca_mem_fix -> origin/xmfan/ca_mem_fix 2025-08-14T21:24:06.2060003Z * [new branch] xmfan/ca_memory_fix -> origin/xmfan/ca_memory_fix 2025-08-14T21:24:06.2060189Z * [new branch] xmfan/ca_memory_fix_rebased -> origin/xmfan/ca_memory_fix_rebased 2025-08-14T21:24:06.2060359Z * [new branch] xmfan/ca_memory_fix_rebased2 -> origin/xmfan/ca_memory_fix_rebased2 2025-08-14T21:24:06.2060543Z * [new branch] xmfan/ca_move_to_cuda -> origin/xmfan/ca_move_to_cuda 2025-08-14T21:24:06.2060681Z * [new branch] xmfan/ca_nested -> origin/xmfan/ca_nested 2025-08-14T21:24:06.2067161Z * [new branch] xmfan/ca_overhead -> origin/xmfan/ca_overhead 2025-08-14T21:24:06.2069358Z * [new branch] xmfan/ca_overhead_0eba7e5451 -> origin/xmfan/ca_overhead_0eba7e5451 2025-08-14T21:24:06.2072693Z * [new branch] xmfan/ca_scalar -> origin/xmfan/ca_scalar 2025-08-14T21:24:06.2072996Z * [new branch] xmfan/ca_subclass_mem_fix -> origin/xmfan/ca_subclass_mem_fix 2025-08-14T21:24:06.2078244Z * [new branch] xmfan/ca_warm_mem -> origin/xmfan/ca_warm_mem 2025-08-14T21:24:06.2082425Z * [new branch] xmfan/ca_warm_mem_base -> origin/xmfan/ca_warm_mem_base 2025-08-14T21:24:06.2086933Z * [new branch] xmfan/cacu_jun18 -> origin/xmfan/cacu_jun18 2025-08-14T21:24:06.2087276Z * [new branch] xmfan/cacu_jun19 -> origin/xmfan/cacu_jun19 2025-08-14T21:24:06.2087483Z * [new branch] xmfan/cacu_jun4 -> origin/xmfan/cacu_jun4 2025-08-14T21:24:06.2087615Z * [new branch] xmfan/cacu_may27 -> origin/xmfan/cacu_may27 2025-08-14T21:24:06.2087779Z * [new branch] xmfan/circular_dep -> origin/xmfan/circular_dep 2025-08-14T21:24:06.2087971Z * [new branch] xmfan/compiled_autograd_feb_29 -> origin/xmfan/compiled_autograd_feb_29 2025-08-14T21:24:06.2088175Z * [new branch] xmfan/compiled_autograd_graph_breaks -> origin/xmfan/compiled_autograd_graph_breaks 2025-08-14T21:24:06.2088342Z * [new branch] xmfan/disable_duck_shape -> origin/xmfan/disable_duck_shape 2025-08-14T21:24:06.2088524Z * [new branch] xmfan/fca_cpp_node_passthrough -> origin/xmfan/fca_cpp_node_passthrough 2025-08-14T21:24:06.2088822Z * [new branch] xmfan/issue_123374 -> origin/xmfan/issue_123374 2025-08-14T21:24:06.2089110Z * [new branch] xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 2025-08-14T21:24:06.2089391Z * [new branch] xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 2025-08-14T21:24:06.2089577Z * [new branch] xmfan/segfault_test -> origin/xmfan/segfault_test 2025-08-14T21:24:06.2089714Z * [new branch] xmfan/single_step -> origin/xmfan/single_step 2025-08-14T21:24:06.2089840Z * [new branch] xmfan/sth_0829 -> origin/xmfan/sth_0829 2025-08-14T21:24:06.2089972Z * [new branch] xmfan/test -> origin/xmfan/test 2025-08-14T21:24:06.2090173Z * [new branch] y-do-we-have-7-build-systems -> 
origin/y-do-we-have-7-build-systems 2025-08-14T21:24:06.2090401Z * [new branch] yguo/debug-0226-constexpr -> origin/yguo/debug-0226-constexpr 2025-08-14T21:24:06.2090554Z * [new branch] yguo/new_latest_changes -> origin/yguo/new_latest_changes 2025-08-14T21:24:06.2090740Z * [new branch] yguo/patch_constexpr_changes -> origin/yguo/patch_constexpr_changes 2025-08-14T21:24:06.2090883Z * [new branch] yihan_quantization -> origin/yihan_quantization 2025-08-14T21:24:06.2091062Z * [new branch] yiming/add_nativert_benchmark -> origin/yiming/add_nativert_benchmark 2025-08-14T21:24:06.2091203Z * [new branch] yiming/bootcamp -> origin/yiming/bootcamp 2025-08-14T21:24:06.2091343Z * [new branch] zainr/canary-test -> origin/zainr/canary-test 2025-08-14T21:24:06.2091509Z * [new branch] zainr/cleanup-gh-runners -> origin/zainr/cleanup-gh-runners 2025-08-14T21:24:06.2091648Z * [new branch] zainr/fixlint -> origin/zainr/fixlint 2025-08-14T21:24:06.2091791Z * [new branch] zainr/git-push-v2 -> origin/zainr/git-push-v2 2025-08-14T21:24:06.2091933Z * [new branch] zainr/lint-py3.9 -> origin/zainr/lint-py3.9 2025-08-14T21:24:06.2092075Z * [new branch] zainr/mypy15-claude -> origin/zainr/mypy15-claude 2025-08-14T21:24:06.2092225Z * [new branch] zainr/pre-push-hooks -> origin/zainr/pre-push-hooks 2025-08-14T21:24:06.2092387Z * [new branch] zainr/pull-migration-c -> origin/zainr/pull-migration-c 2025-08-14T21:24:06.2092509Z * [new branch] zainr/test2 -> origin/zainr/test2 2025-08-14T21:24:06.2092651Z * [new branch] zainr/unstable -> origin/zainr/unstable 2025-08-14T21:24:06.2092813Z * [new branch] zainr/unstable-xla -> origin/zainr/unstable-xla 2025-08-14T21:24:06.2092951Z * [new branch] zainr/uv-pip-fix -> origin/zainr/uv-pip-fix 2025-08-14T21:24:06.2093092Z * [new branch] zainr/vs-aarch64 -> origin/zainr/vs-aarch64 2025-08-14T21:24:06.2093239Z * [new branch] zasdfgbnm-patch-3 -> origin/zasdfgbnm-patch-3 2025-08-14T21:24:06.2093356Z * [new branch] zb2p -> origin/zb2p 2025-08-14T21:24:06.2093506Z * [new branch] zdevito-patch-1 -> origin/zdevito-patch-1 2025-08-14T21:24:06.2093669Z * [new branch] zeros-and-scatter-part2 -> origin/zeros-and-scatter-part2 2025-08-14T21:24:06.2093855Z * [new branch] zhxchen17/nativert/0 -> origin/zhxchen17/nativert/0 2025-08-14T21:24:06.2095202Z * [new branch] zhxchen17/scratch/0 -> origin/zhxchen17/scratch/0 2025-08-14T21:24:06.2095929Z * [new branch] zhxhcen17/moodycamel -> origin/zhxhcen17/moodycamel 2025-08-14T21:24:06.2097210Z * [new branch] zxiiro/bazel -> origin/zxiiro/bazel 2025-08-14T21:24:06.2097518Z * [new branch] zxiiro/get-hardware -> origin/zxiiro/get-hardware 2025-08-14T21:24:06.2098620Z * [new branch] zxiiro/main -> origin/zxiiro/main 2025-08-14T21:24:06.2099080Z * [new branch] zxiiro/test -> origin/zxiiro/test 2025-08-14T21:24:06.2100426Z * [new tag] bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug -> bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug 2025-08-14T21:24:06.2100570Z * [new tag] ci/binaries/77164 -> ci/binaries/77164 2025-08-14T21:24:06.2101594Z * [new tag] ciflow/binaries/138996 -> ciflow/binaries/138996 2025-08-14T21:24:06.2101744Z * [new tag] ciflow/binaries/143959 -> ciflow/binaries/143959 2025-08-14T21:24:06.2105252Z * [new tag] ciflow/binaries/154595 -> ciflow/binaries/154595 2025-08-14T21:24:06.2105597Z * [new tag] ciflow/binaries/156049 -> ciflow/binaries/156049 2025-08-14T21:24:06.2105729Z * [new tag] ciflow/binaries/156712 -> ciflow/binaries/156712 2025-08-14T21:24:06.2105854Z * [new tag] ciflow/binaries/157432 -> ciflow/binaries/157432 
2025-08-14T21:24:06.2105988Z * [new tag] ciflow/binaries/157685 -> ciflow/binaries/157685 2025-08-14T21:24:06.2106113Z * [new tag] ciflow/binaries/157689 -> ciflow/binaries/157689 2025-08-14T21:24:06.2106248Z * [new tag] ciflow/binaries/158104 -> ciflow/binaries/158104 2025-08-14T21:24:06.2106374Z * [new tag] ciflow/binaries/158623 -> ciflow/binaries/158623 2025-08-14T21:24:06.2106661Z * [new tag] ciflow/binaries/159827 -> ciflow/binaries/159827 2025-08-14T21:24:06.2106815Z * [new tag] ciflow/binaries/159869 -> ciflow/binaries/159869 2025-08-14T21:24:06.2107374Z * [new tag] ciflow/binaries/160593 -> ciflow/binaries/160593 2025-08-14T21:24:06.2108444Z * [new tag] ciflow/binaries_libtorch/143959 -> ciflow/binaries_libtorch/143959 2025-08-14T21:24:06.2108608Z * [new tag] ciflow/binaries_libtorch/156049 -> ciflow/binaries_libtorch/156049 2025-08-14T21:24:06.2108929Z * [new tag] ciflow/binaries_libtorch/157432 -> ciflow/binaries_libtorch/157432 2025-08-14T21:24:06.2111524Z * [new tag] ciflow/binaries_wheel/143959 -> ciflow/binaries_wheel/143959 2025-08-14T21:24:06.2111875Z * [new tag] ciflow/binaries_wheel/156049 -> ciflow/binaries_wheel/156049 2025-08-14T21:24:06.2112065Z * [new tag] ciflow/binaries_wheel/157432 -> ciflow/binaries_wheel/157432 2025-08-14T21:24:06.2112229Z * [new tag] ciflow/binaries_wheel/158733 -> ciflow/binaries_wheel/158733 2025-08-14T21:24:06.2112389Z * [new tag] ciflow/binaries_wheel/160301 -> ciflow/binaries_wheel/160301 2025-08-14T21:24:06.2112693Z * [new tag] ciflow/binaries_wheel/160496 -> ciflow/binaries_wheel/160496 2025-08-14T21:24:06.2112899Z * [new tag] ciflow/h100-distributed/156703 -> ciflow/h100-distributed/156703 2025-08-14T21:24:06.2113145Z * [new tag] ciflow/h100-symm-mem/151845 -> ciflow/h100-symm-mem/151845 2025-08-14T21:24:06.2113724Z * [new tag] ciflow/h100-symm-mem/155923 -> ciflow/h100-symm-mem/155923 2025-08-14T21:24:06.2114016Z * [new tag] ciflow/h100-symm-mem/157635 -> ciflow/h100-symm-mem/157635 2025-08-14T21:24:06.2114857Z * [new tag] ciflow/h100-symm-mem/159118 -> ciflow/h100-symm-mem/159118 2025-08-14T21:24:06.2115011Z * [new tag] ciflow/h100-symm-mem/159562 -> ciflow/h100-symm-mem/159562 2025-08-14T21:24:06.2118308Z * [new tag] ciflow/h100-symm-mem/159889 -> ciflow/h100-symm-mem/159889 2025-08-14T21:24:06.2118624Z * [new tag] ciflow/h100/159158 -> ciflow/h100/159158 2025-08-14T21:24:06.2118942Z * [new tag] ciflow/h100/160450 -> ciflow/h100/160450 2025-08-14T21:24:06.2119066Z * [new tag] ciflow/h100/160480 -> ciflow/h100/160480 2025-08-14T21:24:06.2119193Z * [new tag] ciflow/h100/160614 -> ciflow/h100/160614 2025-08-14T21:24:06.2119597Z * [new tag] ciflow/inductor-perf-test-nightly-rocm/151845 -> ciflow/inductor-perf-test-nightly-rocm/151845 2025-08-14T21:24:06.2120285Z * [new tag] ciflow/inductor-perf-test-nightly-rocm/160538 -> ciflow/inductor-perf-test-nightly-rocm/160538 2025-08-14T21:24:06.2120615Z * [new tag] ciflow/inductor-perf-test-nightly-x86-zen/156599 -> ciflow/inductor-perf-test-nightly-x86-zen/156599 2025-08-14T21:24:06.2120829Z * [new tag] ciflow/inductor-periodic/160406 -> ciflow/inductor-periodic/160406 2025-08-14T21:24:06.2121664Z * [new tag] ciflow/inductor-periodic/160538 -> ciflow/inductor-periodic/160538 2025-08-14T21:24:06.2121938Z * [new tag] ciflow/inductor-rocm/151845 -> ciflow/inductor-rocm/151845 2025-08-14T21:24:06.2122175Z * [new tag] ciflow/inductor-rocm/159158 -> ciflow/inductor-rocm/159158 2025-08-14T21:24:06.2127058Z * [new tag] ciflow/inductor-rocm/160073 -> ciflow/inductor-rocm/160073 
2025-08-14T21:24:06.2131370Z * [new tag] ciflow/inductor-rocm/160538 -> ciflow/inductor-rocm/160538 2025-08-14T21:24:06.2131553Z * [new tag] ciflow/inductor/134881 -> ciflow/inductor/134881 2025-08-14T21:24:06.2131700Z * [new tag] ciflow/inductor/137400 -> ciflow/inductor/137400 2025-08-14T21:24:06.2131830Z * [new tag] ciflow/inductor/144516 -> ciflow/inductor/144516 2025-08-14T21:24:06.2131959Z * [new tag] ciflow/inductor/146506 -> ciflow/inductor/146506 2025-08-14T21:24:06.2132114Z * [new tag] ciflow/inductor/147360 -> ciflow/inductor/147360 2025-08-14T21:24:06.2132240Z * [new tag] ciflow/inductor/147990 -> ciflow/inductor/147990 2025-08-14T21:24:06.2132369Z * [new tag] ciflow/inductor/148180 -> ciflow/inductor/148180 2025-08-14T21:24:06.2132491Z * [new tag] ciflow/inductor/148328 -> ciflow/inductor/148328 2025-08-14T21:24:06.2132612Z * [new tag] ciflow/inductor/148484 -> ciflow/inductor/148484 2025-08-14T21:24:06.2132741Z * [new tag] ciflow/inductor/148492 -> ciflow/inductor/148492 2025-08-14T21:24:06.2132862Z * [new tag] ciflow/inductor/150302 -> ciflow/inductor/150302 2025-08-14T21:24:06.2132992Z * [new tag] ciflow/inductor/151845 -> ciflow/inductor/151845 2025-08-14T21:24:06.2133124Z * [new tag] ciflow/inductor/152198 -> ciflow/inductor/152198 2025-08-14T21:24:06.2133250Z * [new tag] ciflow/inductor/152624 -> ciflow/inductor/152624 2025-08-14T21:24:06.2133383Z * [new tag] ciflow/inductor/153966 -> ciflow/inductor/153966 2025-08-14T21:24:06.2133506Z * [new tag] ciflow/inductor/154193 -> ciflow/inductor/154193 2025-08-14T21:24:06.2133635Z * [new tag] ciflow/inductor/154650 -> ciflow/inductor/154650 2025-08-14T21:24:06.2133757Z * [new tag] ciflow/inductor/154694 -> ciflow/inductor/154694 2025-08-14T21:24:06.2133877Z * [new tag] ciflow/inductor/155072 -> ciflow/inductor/155072 2025-08-14T21:24:06.2134004Z * [new tag] ciflow/inductor/155152 -> ciflow/inductor/155152 2025-08-14T21:24:06.2134128Z * [new tag] ciflow/inductor/155153 -> ciflow/inductor/155153 2025-08-14T21:24:06.2134261Z * [new tag] ciflow/inductor/155154 -> ciflow/inductor/155154 2025-08-14T21:24:06.2134392Z * [new tag] ciflow/inductor/155501 -> ciflow/inductor/155501 2025-08-14T21:24:06.2134791Z * [new tag] ciflow/inductor/155502 -> ciflow/inductor/155502 2025-08-14T21:24:06.2135317Z * [new tag] ciflow/inductor/155503 -> ciflow/inductor/155503 2025-08-14T21:24:06.2135824Z * [new tag] ciflow/inductor/155504 -> ciflow/inductor/155504 2025-08-14T21:24:06.2136407Z * [new tag] ciflow/inductor/155557 -> ciflow/inductor/155557 2025-08-14T21:24:06.2137037Z * [new tag] ciflow/inductor/155608 -> ciflow/inductor/155608 2025-08-14T21:24:06.2137483Z * [new tag] ciflow/inductor/155923 -> ciflow/inductor/155923 2025-08-14T21:24:06.2137729Z * [new tag] ciflow/inductor/155928 -> ciflow/inductor/155928 2025-08-14T21:24:06.2138626Z * [new tag] ciflow/inductor/155958 -> ciflow/inductor/155958 2025-08-14T21:24:06.2138891Z * [new tag] ciflow/inductor/156049 -> ciflow/inductor/156049 2025-08-14T21:24:06.2140036Z * [new tag] ciflow/inductor/156851 -> ciflow/inductor/156851 2025-08-14T21:24:06.2140267Z * [new tag] ciflow/inductor/156967 -> ciflow/inductor/156967 2025-08-14T21:24:06.2140812Z * [new tag] ciflow/inductor/157148 -> ciflow/inductor/157148 2025-08-14T21:24:06.2141180Z * [new tag] ciflow/inductor/157149 -> ciflow/inductor/157149 2025-08-14T21:24:06.2141702Z * [new tag] ciflow/inductor/157152 -> ciflow/inductor/157152 2025-08-14T21:24:06.2142184Z * [new tag] ciflow/inductor/157542 -> ciflow/inductor/157542 2025-08-14T21:24:06.2147496Z * 
[new tag] ciflow/inductor/157572 -> ciflow/inductor/157572 2025-08-14T21:24:06.2147649Z * [new tag] ciflow/inductor/157635 -> ciflow/inductor/157635 2025-08-14T21:24:06.2148115Z * [new tag] ciflow/inductor/157685 -> ciflow/inductor/157685 2025-08-14T21:24:06.2148587Z * [new tag] ciflow/inductor/157686 -> ciflow/inductor/157686 2025-08-14T21:24:06.2149046Z * [new tag] ciflow/inductor/157689 -> ciflow/inductor/157689 2025-08-14T21:24:06.2150311Z * [new tag] ciflow/inductor/157699 -> ciflow/inductor/157699 2025-08-14T21:24:06.2150480Z * [new tag] ciflow/inductor/157743 -> ciflow/inductor/157743 2025-08-14T21:24:06.2150643Z * [new tag] ciflow/inductor/157944 -> ciflow/inductor/157944 2025-08-14T21:24:06.2151111Z * [new tag] ciflow/inductor/157971 -> ciflow/inductor/157971 2025-08-14T21:24:06.2152398Z * [new tag] ciflow/inductor/157994 -> ciflow/inductor/157994 2025-08-14T21:24:06.2152727Z * [new tag] ciflow/inductor/158061 -> ciflow/inductor/158061 2025-08-14T21:24:06.2152876Z * [new tag] ciflow/inductor/158091 -> ciflow/inductor/158091 2025-08-14T21:24:06.2153255Z * [new tag] ciflow/inductor/158097 -> ciflow/inductor/158097 2025-08-14T21:24:06.2153636Z * [new tag] ciflow/inductor/158098 -> ciflow/inductor/158098 2025-08-14T21:24:06.2154047Z * [new tag] ciflow/inductor/158104 -> ciflow/inductor/158104 2025-08-14T21:24:06.2155439Z * [new tag] ciflow/inductor/158168 -> ciflow/inductor/158168 2025-08-14T21:24:06.2155762Z * [new tag] ciflow/inductor/158250 -> ciflow/inductor/158250 2025-08-14T21:24:06.2155903Z * [new tag] ciflow/inductor/158321 -> ciflow/inductor/158321 2025-08-14T21:24:06.2156108Z * [new tag] ciflow/inductor/158609 -> ciflow/inductor/158609 2025-08-14T21:24:06.2156509Z * [new tag] ciflow/inductor/158647 -> ciflow/inductor/158647 2025-08-14T21:24:06.2159603Z * [new tag] ciflow/inductor/158914 -> ciflow/inductor/158914 2025-08-14T21:24:06.2160164Z * [new tag] ciflow/inductor/158932 -> ciflow/inductor/158932 2025-08-14T21:24:06.2160444Z * [new tag] ciflow/inductor/158987 -> ciflow/inductor/158987 2025-08-14T21:24:06.2160580Z * [new tag] ciflow/inductor/159009 -> ciflow/inductor/159009 2025-08-14T21:24:06.2160699Z * [new tag] ciflow/inductor/159010 -> ciflow/inductor/159010 2025-08-14T21:24:06.2160829Z * [new tag] ciflow/inductor/159093 -> ciflow/inductor/159093 2025-08-14T21:24:06.2161079Z * [new tag] ciflow/inductor/159158 -> ciflow/inductor/159158 2025-08-14T21:24:06.2161235Z * [new tag] ciflow/inductor/159197 -> ciflow/inductor/159197 2025-08-14T21:24:06.2161445Z * [new tag] ciflow/inductor/159274 -> ciflow/inductor/159274 2025-08-14T21:24:06.2162086Z * [new tag] ciflow/inductor/159281 -> ciflow/inductor/159281 2025-08-14T21:24:06.2162504Z * [new tag] ciflow/inductor/159329 -> ciflow/inductor/159329 2025-08-14T21:24:06.2162634Z * [new tag] ciflow/inductor/159361 -> ciflow/inductor/159361 2025-08-14T21:24:06.2163033Z * [new tag] ciflow/inductor/159365 -> ciflow/inductor/159365 2025-08-14T21:24:06.2163390Z * [new tag] ciflow/inductor/159366 -> ciflow/inductor/159366 2025-08-14T21:24:06.2163827Z * [new tag] ciflow/inductor/159367 -> ciflow/inductor/159367 2025-08-14T21:24:06.2164211Z * [new tag] ciflow/inductor/159368 -> ciflow/inductor/159368 2025-08-14T21:24:06.2165430Z * [new tag] ciflow/inductor/159473 -> ciflow/inductor/159473 2025-08-14T21:24:06.2165677Z * [new tag] ciflow/inductor/159483 -> ciflow/inductor/159483 2025-08-14T21:24:06.2165821Z * [new tag] ciflow/inductor/159508 -> ciflow/inductor/159508 2025-08-14T21:24:06.2166107Z * [new tag] ciflow/inductor/159523 -> 
ciflow/inductor/159523 2025-08-14T21:24:06.2166560Z * [new tag] ciflow/inductor/159678 -> ciflow/inductor/159678 2025-08-14T21:24:06.2166998Z * [new tag] ciflow/inductor/159691 -> ciflow/inductor/159691 2025-08-14T21:24:06.2168265Z * [new tag] ciflow/inductor/159778 -> ciflow/inductor/159778 2025-08-14T21:24:06.2168529Z * [new tag] ciflow/inductor/159786 -> ciflow/inductor/159786 2025-08-14T21:24:06.2168669Z * [new tag] ciflow/inductor/159817 -> ciflow/inductor/159817 2025-08-14T21:24:06.2169054Z * [new tag] ciflow/inductor/159842 -> ciflow/inductor/159842 2025-08-14T21:24:06.2172129Z * [new tag] ciflow/inductor/159864 -> ciflow/inductor/159864 2025-08-14T21:24:06.2172434Z * [new tag] ciflow/inductor/159865 -> ciflow/inductor/159865 2025-08-14T21:24:06.2172592Z * [new tag] ciflow/inductor/159869 -> ciflow/inductor/159869 2025-08-14T21:24:06.2172720Z * [new tag] ciflow/inductor/159875 -> ciflow/inductor/159875 2025-08-14T21:24:06.2172848Z * [new tag] ciflow/inductor/159889 -> ciflow/inductor/159889 2025-08-14T21:24:06.2173088Z * [new tag] ciflow/inductor/159902 -> ciflow/inductor/159902 2025-08-14T21:24:06.2173213Z * [new tag] ciflow/inductor/159923 -> ciflow/inductor/159923 2025-08-14T21:24:06.2173419Z * [new tag] ciflow/inductor/159944 -> ciflow/inductor/159944 2025-08-14T21:24:06.2173564Z * [new tag] ciflow/inductor/160004 -> ciflow/inductor/160004 2025-08-14T21:24:06.2173697Z * [new tag] ciflow/inductor/160080 -> ciflow/inductor/160080 2025-08-14T21:24:06.2174751Z * [new tag] ciflow/inductor/160108 -> ciflow/inductor/160108 2025-08-14T21:24:06.2174901Z * [new tag] ciflow/inductor/160109 -> ciflow/inductor/160109 2025-08-14T21:24:06.2177884Z * [new tag] ciflow/inductor/160111 -> ciflow/inductor/160111 2025-08-14T21:24:06.2179683Z * [new tag] ciflow/inductor/160113 -> ciflow/inductor/160113 2025-08-14T21:24:06.2179974Z * [new tag] ciflow/inductor/160127 -> ciflow/inductor/160127 2025-08-14T21:24:06.2180218Z * [new tag] ciflow/inductor/160131 -> ciflow/inductor/160131 2025-08-14T21:24:06.2180357Z * [new tag] ciflow/inductor/160132 -> ciflow/inductor/160132 2025-08-14T21:24:06.2180485Z * [new tag] ciflow/inductor/160136 -> ciflow/inductor/160136 2025-08-14T21:24:06.2180634Z * [new tag] ciflow/inductor/160138 -> ciflow/inductor/160138 2025-08-14T21:24:06.2180761Z * [new tag] ciflow/inductor/160151 -> ciflow/inductor/160151 2025-08-14T21:24:06.2180887Z * [new tag] ciflow/inductor/160152 -> ciflow/inductor/160152 2025-08-14T21:24:06.2181094Z * [new tag] ciflow/inductor/160154 -> ciflow/inductor/160154 2025-08-14T21:24:06.2181227Z * [new tag] ciflow/inductor/160156 -> ciflow/inductor/160156 2025-08-14T21:24:06.2186842Z * [new tag] ciflow/inductor/160161 -> ciflow/inductor/160161 2025-08-14T21:24:06.2191763Z * [new tag] ciflow/inductor/160166 -> ciflow/inductor/160166 2025-08-14T21:24:06.2196568Z * [new tag] ciflow/inductor/160168 -> ciflow/inductor/160168 2025-08-14T21:24:06.2201910Z * [new tag] ciflow/inductor/160174 -> ciflow/inductor/160174 2025-08-14T21:24:06.2202248Z * [new tag] ciflow/inductor/160181 -> ciflow/inductor/160181 2025-08-14T21:24:06.2202391Z * [new tag] ciflow/inductor/160183 -> ciflow/inductor/160183 2025-08-14T21:24:06.2202509Z * [new tag] ciflow/inductor/160190 -> ciflow/inductor/160190 2025-08-14T21:24:06.2202677Z * [new tag] ciflow/inductor/160198 -> ciflow/inductor/160198 2025-08-14T21:24:06.2202796Z * [new tag] ciflow/inductor/160201 -> ciflow/inductor/160201 2025-08-14T21:24:06.2202911Z * [new tag] ciflow/inductor/160209 -> ciflow/inductor/160209 
2025-08-14T21:24:06.2203036Z * [new tag] ciflow/inductor/160218 -> ciflow/inductor/160218 2025-08-14T21:24:06.2203156Z * [new tag] ciflow/inductor/160239 -> ciflow/inductor/160239 2025-08-14T21:24:06.2203289Z * [new tag] ciflow/inductor/160250 -> ciflow/inductor/160250 2025-08-14T21:24:06.2203414Z * [new tag] ciflow/inductor/160253 -> ciflow/inductor/160253 2025-08-14T21:24:06.2203537Z * [new tag] ciflow/inductor/160266 -> ciflow/inductor/160266 2025-08-14T21:24:06.2203666Z * [new tag] ciflow/inductor/160282 -> ciflow/inductor/160282 2025-08-14T21:24:06.2203794Z * [new tag] ciflow/inductor/160298 -> ciflow/inductor/160298 2025-08-14T21:24:06.2203916Z * [new tag] ciflow/inductor/160301 -> ciflow/inductor/160301 2025-08-14T21:24:06.2204043Z * [new tag] ciflow/inductor/160310 -> ciflow/inductor/160310 2025-08-14T21:24:06.2204162Z * [new tag] ciflow/inductor/160323 -> ciflow/inductor/160323 2025-08-14T21:24:06.2204296Z * [new tag] ciflow/inductor/160324 -> ciflow/inductor/160324 2025-08-14T21:24:06.2204415Z * [new tag] ciflow/inductor/160325 -> ciflow/inductor/160325 2025-08-14T21:24:06.2204529Z * [new tag] ciflow/inductor/160326 -> ciflow/inductor/160326 2025-08-14T21:24:06.2204649Z * [new tag] ciflow/inductor/160327 -> ciflow/inductor/160327 2025-08-14T21:24:06.2204763Z * [new tag] ciflow/inductor/160328 -> ciflow/inductor/160328 2025-08-14T21:24:06.2205025Z * [new tag] ciflow/inductor/160329 -> ciflow/inductor/160329 2025-08-14T21:24:06.2205155Z * [new tag] ciflow/inductor/160351 -> ciflow/inductor/160351 2025-08-14T21:24:06.2205269Z * [new tag] ciflow/inductor/160353 -> ciflow/inductor/160353 2025-08-14T21:24:06.2205398Z * [new tag] ciflow/inductor/160362 -> ciflow/inductor/160362 2025-08-14T21:24:06.2205512Z * [new tag] ciflow/inductor/160363 -> ciflow/inductor/160363 2025-08-14T21:24:06.2205634Z * [new tag] ciflow/inductor/160364 -> ciflow/inductor/160364 2025-08-14T21:24:06.2205768Z * [new tag] ciflow/inductor/160365 -> ciflow/inductor/160365 2025-08-14T21:24:06.2205888Z * [new tag] ciflow/inductor/160366 -> ciflow/inductor/160366 2025-08-14T21:24:06.2206017Z * [new tag] ciflow/inductor/160367 -> ciflow/inductor/160367 2025-08-14T21:24:06.2206220Z * [new tag] ciflow/inductor/160368 -> ciflow/inductor/160368 2025-08-14T21:24:06.2206337Z * [new tag] ciflow/inductor/160369 -> ciflow/inductor/160369 2025-08-14T21:24:06.2206458Z * [new tag] ciflow/inductor/160371 -> ciflow/inductor/160371 2025-08-14T21:24:06.2206574Z * [new tag] ciflow/inductor/160374 -> ciflow/inductor/160374 2025-08-14T21:24:06.2206694Z * [new tag] ciflow/inductor/160375 -> ciflow/inductor/160375 2025-08-14T21:24:06.2206807Z * [new tag] ciflow/inductor/160377 -> ciflow/inductor/160377 2025-08-14T21:24:06.2206919Z * [new tag] ciflow/inductor/160380 -> ciflow/inductor/160380 2025-08-14T21:24:06.2207038Z * [new tag] ciflow/inductor/160381 -> ciflow/inductor/160381 2025-08-14T21:24:06.2207158Z * [new tag] ciflow/inductor/160383 -> ciflow/inductor/160383 2025-08-14T21:24:06.2207283Z * [new tag] ciflow/inductor/160394 -> ciflow/inductor/160394 2025-08-14T21:24:06.2207402Z * [new tag] ciflow/inductor/160401 -> ciflow/inductor/160401 2025-08-14T21:24:06.2207521Z * [new tag] ciflow/inductor/160402 -> ciflow/inductor/160402 2025-08-14T21:24:06.2207647Z * [new tag] ciflow/inductor/160403 -> ciflow/inductor/160403 2025-08-14T21:24:06.2207769Z * [new tag] ciflow/inductor/160424 -> ciflow/inductor/160424 2025-08-14T21:24:06.2207888Z * [new tag] ciflow/inductor/160426 -> ciflow/inductor/160426 2025-08-14T21:24:06.2208026Z * [new tag] 
ciflow/inductor/160431 -> ciflow/inductor/160431 2025-08-14T21:24:06.2208140Z * [new tag] ciflow/inductor/160448 -> ciflow/inductor/160448 2025-08-14T21:24:06.2208269Z * [new tag] ciflow/inductor/160450 -> ciflow/inductor/160450 2025-08-14T21:24:06.2208387Z * [new tag] ciflow/inductor/160455 -> ciflow/inductor/160455 2025-08-14T21:24:06.2209442Z * [new tag] ciflow/inductor/160456 -> ciflow/inductor/160456 2025-08-14T21:24:06.2210048Z * [new tag] ciflow/inductor/160461 -> ciflow/inductor/160461 2025-08-14T21:24:06.2210208Z * [new tag] ciflow/inductor/160462 -> ciflow/inductor/160462 2025-08-14T21:24:06.2210692Z * [new tag] ciflow/inductor/160467 -> ciflow/inductor/160467 2025-08-14T21:24:06.2211119Z * [new tag] ciflow/inductor/160470 -> ciflow/inductor/160470 2025-08-14T21:24:06.2211558Z * [new tag] ciflow/inductor/160473 -> ciflow/inductor/160473 2025-08-14T21:24:06.2211949Z * [new tag] ciflow/inductor/160476 -> ciflow/inductor/160476 2025-08-14T21:24:06.2212401Z * [new tag] ciflow/inductor/160480 -> ciflow/inductor/160480 2025-08-14T21:24:06.2214128Z * [new tag] ciflow/inductor/160481 -> ciflow/inductor/160481 2025-08-14T21:24:06.2214437Z * [new tag] ciflow/inductor/160482 -> ciflow/inductor/160482 2025-08-14T21:24:06.2218860Z * [new tag] ciflow/inductor/160483 -> ciflow/inductor/160483 2025-08-14T21:24:06.2219000Z * [new tag] ciflow/inductor/160485 -> ciflow/inductor/160485 2025-08-14T21:24:06.2219151Z * [new tag] ciflow/inductor/160486 -> ciflow/inductor/160486 2025-08-14T21:24:06.2219275Z * [new tag] ciflow/inductor/160503 -> ciflow/inductor/160503 2025-08-14T21:24:06.2219398Z * [new tag] ciflow/inductor/160510 -> ciflow/inductor/160510 2025-08-14T21:24:06.2219529Z * [new tag] ciflow/inductor/160527 -> ciflow/inductor/160527 2025-08-14T21:24:06.2219649Z * [new tag] ciflow/inductor/160530 -> ciflow/inductor/160530 2025-08-14T21:24:06.2219923Z * [new tag] ciflow/inductor/160531 -> ciflow/inductor/160531 2025-08-14T21:24:06.2220124Z * [new tag] ciflow/inductor/160538 -> ciflow/inductor/160538 2025-08-14T21:24:06.2220249Z * [new tag] ciflow/inductor/160539 -> ciflow/inductor/160539 2025-08-14T21:24:06.2220383Z * [new tag] ciflow/inductor/160540 -> ciflow/inductor/160540 2025-08-14T21:24:06.2220512Z * [new tag] ciflow/inductor/160548 -> ciflow/inductor/160548 2025-08-14T21:24:06.2220649Z * [new tag] ciflow/inductor/160561 -> ciflow/inductor/160561 2025-08-14T21:24:06.2220774Z * [new tag] ciflow/inductor/160576 -> ciflow/inductor/160576 2025-08-14T21:24:06.2221088Z * [new tag] ciflow/inductor/160578 -> ciflow/inductor/160578 2025-08-14T21:24:06.2221234Z * [new tag] ciflow/inductor/160580 -> ciflow/inductor/160580 2025-08-14T21:24:06.2229877Z * [new tag] ciflow/inductor/160583 -> ciflow/inductor/160583 2025-08-14T21:24:06.2234260Z * [new tag] ciflow/inductor/160589 -> ciflow/inductor/160589 2025-08-14T21:24:06.2239351Z * [new tag] ciflow/inductor/160590 -> ciflow/inductor/160590 2025-08-14T21:24:06.2239531Z * [new tag] ciflow/inductor/160592 -> ciflow/inductor/160592 2025-08-14T21:24:06.2239670Z * [new tag] ciflow/inductor/160596 -> ciflow/inductor/160596 2025-08-14T21:24:06.2239797Z * [new tag] ciflow/inductor/160601 -> ciflow/inductor/160601 2025-08-14T21:24:06.2239918Z * [new tag] ciflow/inductor/160607 -> ciflow/inductor/160607 2025-08-14T21:24:06.2240047Z * [new tag] ciflow/inductor/160608 -> ciflow/inductor/160608 2025-08-14T21:24:06.2240168Z * [new tag] ciflow/inductor/160611 -> ciflow/inductor/160611 2025-08-14T21:24:06.2240289Z * [new tag] ciflow/inductor/160614 -> 
ciflow/inductor/160614 2025-08-14T21:24:06.2240440Z * [new tag] ciflow/inductor/160616 -> ciflow/inductor/160616 2025-08-14T21:24:06.2240564Z * [new tag] ciflow/inductor/160619 -> ciflow/inductor/160619 2025-08-14T21:24:06.2240693Z * [new tag] ciflow/inductor/160625 -> ciflow/inductor/160625 2025-08-14T21:24:06.2240812Z * [new tag] ciflow/inductor/160635 -> ciflow/inductor/160635 2025-08-14T21:24:06.2240930Z * [new tag] ciflow/inductor/160649 -> ciflow/inductor/160649 2025-08-14T21:24:06.2241059Z * [new tag] ciflow/inductor/160658 -> ciflow/inductor/160658 2025-08-14T21:24:06.2241179Z * [new tag] ciflow/inductor/160662 -> ciflow/inductor/160662 2025-08-14T21:24:06.2241305Z * [new tag] ciflow/inductor/160668 -> ciflow/inductor/160668 2025-08-14T21:24:06.2241427Z * [new tag] ciflow/inductor/160669 -> ciflow/inductor/160669 2025-08-14T21:24:06.2241552Z * [new tag] ciflow/inductor/160670 -> ciflow/inductor/160670 2025-08-14T21:24:06.2242026Z * [new tag] ciflow/inductor/160671 -> ciflow/inductor/160671 2025-08-14T21:24:06.2242178Z * [new tag] ciflow/inductor/160677 -> ciflow/inductor/160677 2025-08-14T21:24:06.2242298Z * [new tag] ciflow/inductor/160679 -> ciflow/inductor/160679 2025-08-14T21:24:06.2242442Z * [new tag] ciflow/inductor/3b9a386 -> ciflow/inductor/3b9a386 2025-08-14T21:24:06.2242570Z * [new tag] ciflow/inductor/3d4b92b -> ciflow/inductor/3d4b92b 2025-08-14T21:24:06.2242703Z * [new tag] ciflow/inductor/d224ac7 -> ciflow/inductor/d224ac7 2025-08-14T21:24:06.2242850Z * [new tag] ciflow/linux-aarch64/147855 -> ciflow/linux-aarch64/147855 2025-08-14T21:24:06.2242991Z * [new tag] ciflow/linux-aarch64/157994 -> ciflow/linux-aarch64/157994 2025-08-14T21:24:06.2243217Z * [new tag] ciflow/linux-aarch64/159737 -> ciflow/linux-aarch64/159737 2025-08-14T21:24:06.2243361Z * [new tag] ciflow/linux-aarch64/160078 -> ciflow/linux-aarch64/160078 2025-08-14T21:24:06.2243500Z * [new tag] ciflow/linux-aarch64/160299 -> ciflow/linux-aarch64/160299 2025-08-14T21:24:06.2243631Z * [new tag] ciflow/linux-aarch64/160301 -> ciflow/linux-aarch64/160301 2025-08-14T21:24:06.2243754Z * [new tag] ciflow/mps/155923 -> ciflow/mps/155923 2025-08-14T21:24:06.2243881Z * [new tag] ciflow/mps/157553 -> ciflow/mps/157553 2025-08-14T21:24:06.2243993Z * [new tag] ciflow/mps/157635 -> ciflow/mps/157635 2025-08-14T21:24:06.2244111Z * [new tag] ciflow/mps/160541 -> ciflow/mps/160541 2025-08-14T21:24:06.2244243Z * [new tag] ciflow/nightly/156049 -> ciflow/nightly/156049 2025-08-14T21:24:06.2244370Z * [new tag] ciflow/nightly/158104 -> ciflow/nightly/158104 2025-08-14T21:24:06.2244539Z * [new tag] ciflow/op-benchmark/157994 -> ciflow/op-benchmark/157994 2025-08-14T21:24:06.2244726Z * [new tag] ciflow/periodic-rocm-mi300/139971 -> ciflow/periodic-rocm-mi300/139971 2025-08-14T21:24:06.2244917Z * [new tag] ciflow/periodic-rocm-mi300/160073 -> ciflow/periodic-rocm-mi300/160073 2025-08-14T21:24:06.2245087Z * [new tag] ciflow/periodic-rocm-mi300/160538 -> ciflow/periodic-rocm-mi300/160538 2025-08-14T21:24:06.2245219Z * [new tag] ciflow/periodic/054a2fd -> ciflow/periodic/054a2fd 2025-08-14T21:24:06.2245353Z * [new tag] ciflow/periodic/131296 -> ciflow/periodic/131296 2025-08-14T21:24:06.2245492Z * [new tag] ciflow/periodic/139971 -> ciflow/periodic/139971 2025-08-14T21:24:06.2245614Z * [new tag] ciflow/periodic/143959 -> ciflow/periodic/143959 2025-08-14T21:24:06.2245749Z * [new tag] ciflow/periodic/154595 -> ciflow/periodic/154595 2025-08-14T21:24:06.2245873Z * [new tag] ciflow/periodic/156703 -> ciflow/periodic/156703 
2025-08-14T21:24:06.2245999Z * [new tag] ciflow/periodic/160201 -> ciflow/periodic/160201 2025-08-14T21:24:06.2250341Z * [new tag] ciflow/periodic/160424 -> ciflow/periodic/160424 2025-08-14T21:24:06.2253107Z * [new tag] ciflow/periodic/160538 -> ciflow/periodic/160538 2025-08-14T21:24:06.2253640Z * [new tag] ciflow/periodic/1febab2a89302464f6c7d69cfbef7a24c421ea65 -> ciflow/periodic/1febab2a89302464f6c7d69cfbef7a24c421ea65 2025-08-14T21:24:06.2253963Z * [new tag] ciflow/periodic/2a6d37d -> ciflow/periodic/2a6d37d 2025-08-14T21:24:06.2254543Z * [new tag] ciflow/periodic/2ee22e435131369a7e4f8cc4732579acc29a941b -> ciflow/periodic/2ee22e435131369a7e4f8cc4732579acc29a941b 2025-08-14T21:24:06.2254913Z * [new tag] ciflow/periodic/317eeb8 -> ciflow/periodic/317eeb8 2025-08-14T21:24:06.2255075Z * [new tag] ciflow/periodic/3c32 -> ciflow/periodic/3c32 2025-08-14T21:24:06.2255223Z * [new tag] ciflow/periodic/3e98831 -> ciflow/periodic/3e98831 2025-08-14T21:24:06.2255542Z * [new tag] ciflow/periodic/4a773e1e867f28a8ff0b15203e5cd9548f74fcee -> ciflow/periodic/4a773e1e867f28a8ff0b15203e5cd9548f74fcee 2025-08-14T21:24:06.2255863Z * [new tag] ciflow/periodic/5f5f508aa836a46dfe88857fb223049616b94e93 -> ciflow/periodic/5f5f508aa836a46dfe88857fb223049616b94e93 2025-08-14T21:24:06.2256016Z * [new tag] ciflow/periodic/94512-point -> ciflow/periodic/94512-point 2025-08-14T21:24:06.2256178Z * [new tag] ciflow/periodic/csl/test87519 -> ciflow/periodic/csl/test87519 2025-08-14T21:24:06.2256346Z * [new tag] ciflow/periodic/csltest88275 -> ciflow/periodic/csltest88275 2025-08-14T21:24:06.2256554Z * [new tag] ciflow/periodic/csltest88761 -> ciflow/periodic/csltest88761 2025-08-14T21:24:06.2256873Z * [new tag] ciflow/periodic/d7114f05b10de8e6de81ffc567d63944c3117d51 -> ciflow/periodic/d7114f05b10de8e6de81ffc567d63944c3117d51 2025-08-14T21:24:06.2257024Z * [new tag] ciflow/periodic/release_1.12 -> ciflow/periodic/release_1.12 2025-08-14T21:24:06.2257190Z * [new tag] ciflow/periodic/release_1.12.0 -> ciflow/periodic/release_1.12.0 2025-08-14T21:24:06.2257349Z * [new tag] ciflow/periodic/sha-ec5b83 -> ciflow/periodic/sha-ec5b83 2025-08-14T21:24:06.2257492Z * [new tag] ciflow/rocm-mi300/151360 -> ciflow/rocm-mi300/151360 2025-08-14T21:24:06.2257636Z * [new tag] ciflow/rocm-mi300/159158 -> ciflow/rocm-mi300/159158 2025-08-14T21:24:06.2257773Z * [new tag] ciflow/rocm-mi300/160073 -> ciflow/rocm-mi300/160073 2025-08-14T21:24:06.2257905Z * [new tag] ciflow/rocm-mi300/160468 -> ciflow/rocm-mi300/160468 2025-08-14T21:24:06.2258044Z * [new tag] ciflow/rocm-mi300/160538 -> ciflow/rocm-mi300/160538 2025-08-14T21:24:06.2258174Z * [new tag] ciflow/rocm-mi355/160215 -> ciflow/rocm-mi355/160215 2025-08-14T21:24:06.2258306Z * [new tag] ciflow/rocm/148492 -> ciflow/rocm/148492 2025-08-14T21:24:06.2258435Z * [new tag] ciflow/rocm/151360 -> ciflow/rocm/151360 2025-08-14T21:24:06.2258554Z * [new tag] ciflow/rocm/151845 -> ciflow/rocm/151845 2025-08-14T21:24:06.2259172Z * [new tag] ciflow/rocm/154864 -> ciflow/rocm/154864 2025-08-14T21:24:06.2259556Z * [new tag] ciflow/rocm/156491 -> ciflow/rocm/156491 2025-08-14T21:24:06.2260306Z * [new tag] ciflow/rocm/158219 -> ciflow/rocm/158219 2025-08-14T21:24:06.2260834Z * [new tag] ciflow/rocm/158220 -> ciflow/rocm/158220 2025-08-14T21:24:06.2261042Z * [new tag] ciflow/rocm/158224 -> ciflow/rocm/158224 2025-08-14T21:24:06.2261388Z * [new tag] ciflow/rocm/159158 -> ciflow/rocm/159158 2025-08-14T21:24:06.2263664Z * [new tag] ciflow/rocm/160215 -> ciflow/rocm/160215 2025-08-14T21:24:06.2263970Z * 
[new tag] ciflow/rocm/160468 -> ciflow/rocm/160468 2025-08-14T21:24:06.2264121Z * [new tag] ciflow/rocm/160538 -> ciflow/rocm/160538 2025-08-14T21:24:06.2264247Z * [new tag] ciflow/s390/143959 -> ciflow/s390/143959 2025-08-14T21:24:06.2264520Z * [new tag] ciflow/slow/01c7106 -> ciflow/slow/01c7106 2025-08-14T21:24:06.2264779Z * [new tag] ciflow/slow/0577043 -> ciflow/slow/0577043 2025-08-14T21:24:06.2266360Z * [new tag] ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym -> ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym 2025-08-14T21:24:06.2266830Z * [new tag] ciflow/slow/0e81104 -> ciflow/slow/0e81104 2025-08-14T21:24:06.2267081Z * [new tag] ciflow/slow/154595 -> ciflow/slow/154595 2025-08-14T21:24:06.2267235Z * [new tag] ciflow/slow/1732077 -> ciflow/slow/1732077 2025-08-14T21:24:06.2269316Z * [new tag] ciflow/slow/187eb7c -> ciflow/slow/187eb7c 2025-08-14T21:24:06.2269631Z * [new tag] ciflow/slow/1faef89 -> ciflow/slow/1faef89 2025-08-14T21:24:06.2269784Z * [new tag] ciflow/slow/3920ec1 -> ciflow/slow/3920ec1 2025-08-14T21:24:06.2269921Z * [new tag] ciflow/slow/3b7c6b2 -> ciflow/slow/3b7c6b2 2025-08-14T21:24:06.2271541Z * [new tag] ciflow/slow/59a3759 -> ciflow/slow/59a3759 2025-08-14T21:24:06.2271867Z * [new tag] ciflow/slow/70ef0bb -> ciflow/slow/70ef0bb 2025-08-14T21:24:06.2272312Z * [new tag] ciflow/slow/788ff06 -> ciflow/slow/788ff06 2025-08-14T21:24:06.2272665Z * [new tag] ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym -> ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym 2025-08-14T21:24:06.2273039Z * [new tag] ciflow/slow/9d85864 -> ciflow/slow/9d85864 2025-08-14T21:24:06.2274220Z * [new tag] ciflow/slow/9ffad5b -> ciflow/slow/9ffad5b 2025-08-14T21:24:06.2274386Z * [new tag] ciflow/slow/a206e8b -> ciflow/slow/a206e8b 2025-08-14T21:24:06.2274849Z * [new tag] ciflow/slow/a837609 -> ciflow/slow/a837609 2025-08-14T21:24:06.2278382Z * [new tag] ciflow/slow/af841f3 -> ciflow/slow/af841f3 2025-08-14T21:24:06.2278910Z * [new tag] ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym -> ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym 2025-08-14T21:24:06.2279618Z * [new tag] ciflow/trunk/131296 -> ciflow/trunk/131296 2025-08-14T21:24:06.2279768Z * [new tag] ciflow/trunk/137400 -> ciflow/trunk/137400 2025-08-14T21:24:06.2280051Z * [new tag] ciflow/trunk/138996 -> ciflow/trunk/138996 2025-08-14T21:24:06.2280190Z * [new tag] ciflow/trunk/139971 -> ciflow/trunk/139971 2025-08-14T21:24:06.2280322Z * [new tag] ciflow/trunk/147360 -> ciflow/trunk/147360 2025-08-14T21:24:06.2280445Z * [new tag] ciflow/trunk/147855 -> ciflow/trunk/147855 2025-08-14T21:24:06.2280574Z * [new tag] ciflow/trunk/148180 -> ciflow/trunk/148180 2025-08-14T21:24:06.2280693Z * [new tag] ciflow/trunk/148328 -> ciflow/trunk/148328 2025-08-14T21:24:06.2280960Z * [new tag] ciflow/trunk/148492 -> ciflow/trunk/148492 2025-08-14T21:24:06.2281107Z * [new tag] ciflow/trunk/150282 -> ciflow/trunk/150282 2025-08-14T21:24:06.2282291Z * [new tag] ciflow/trunk/150302 -> ciflow/trunk/150302 2025-08-14T21:24:06.2282622Z * [new tag] ciflow/trunk/151845 -> ciflow/trunk/151845 2025-08-14T21:24:06.2282936Z * [new tag] ciflow/trunk/152624 -> ciflow/trunk/152624 2025-08-14T21:24:06.2285196Z * [new tag] ciflow/trunk/154193 -> ciflow/trunk/154193 2025-08-14T21:24:06.2285517Z * [new tag] ciflow/trunk/154595 -> ciflow/trunk/154595 2025-08-14T21:24:06.2285661Z * [new tag] ciflow/trunk/154650 -> ciflow/trunk/154650 2025-08-14T21:24:06.2285879Z * [new tag] ciflow/trunk/154694 -> ciflow/trunk/154694 
2025-08-14T21:24:06.2286012Z * [new tag] ciflow/trunk/155958 -> ciflow/trunk/155958 2025-08-14T21:24:06.2286135Z * [new tag] ciflow/trunk/156049 -> ciflow/trunk/156049 2025-08-14T21:24:06.2286491Z * [new tag] ciflow/trunk/156703 -> ciflow/trunk/156703 2025-08-14T21:24:06.2286792Z * [new tag] ciflow/trunk/156851 -> ciflow/trunk/156851 2025-08-14T21:24:06.2287325Z * [new tag] ciflow/trunk/157148 -> ciflow/trunk/157148 2025-08-14T21:24:06.2287797Z * [new tag] ciflow/trunk/157152 -> ciflow/trunk/157152 2025-08-14T21:24:06.2288208Z * [new tag] ciflow/trunk/157432 -> ciflow/trunk/157432 2025-08-14T21:24:06.2290591Z * [new tag] ciflow/trunk/157685 -> ciflow/trunk/157685 2025-08-14T21:24:06.2290899Z * [new tag] ciflow/trunk/157689 -> ciflow/trunk/157689 2025-08-14T21:24:06.2291048Z * [new tag] ciflow/trunk/157699 -> ciflow/trunk/157699 2025-08-14T21:24:06.2291167Z * [new tag] ciflow/trunk/157813 -> ciflow/trunk/157813 2025-08-14T21:24:06.2291442Z * [new tag] ciflow/trunk/157994 -> ciflow/trunk/157994 2025-08-14T21:24:06.2291700Z * [new tag] ciflow/trunk/158091 -> ciflow/trunk/158091 2025-08-14T21:24:06.2291833Z * [new tag] ciflow/trunk/158104 -> ciflow/trunk/158104 2025-08-14T21:24:06.2291956Z * [new tag] ciflow/trunk/158219 -> ciflow/trunk/158219 2025-08-14T21:24:06.2292313Z * [new tag] ciflow/trunk/158220 -> ciflow/trunk/158220 2025-08-14T21:24:06.2292774Z * [new tag] ciflow/trunk/158224 -> ciflow/trunk/158224 2025-08-14T21:24:06.2294299Z * [new tag] ciflow/trunk/158529 -> ciflow/trunk/158529 2025-08-14T21:24:06.2294469Z * [new tag] ciflow/trunk/158647 -> ciflow/trunk/158647 2025-08-14T21:24:06.2294599Z * [new tag] ciflow/trunk/158810 -> ciflow/trunk/158810 2025-08-14T21:24:06.2294744Z * [new tag] ciflow/trunk/158812 -> ciflow/trunk/158812 2025-08-14T21:24:06.2295314Z * [new tag] ciflow/trunk/158863 -> ciflow/trunk/158863 2025-08-14T21:24:06.2295620Z * [new tag] ciflow/trunk/158864 -> ciflow/trunk/158864 2025-08-14T21:24:06.2296200Z * [new tag] ciflow/trunk/158883 -> ciflow/trunk/158883 2025-08-14T21:24:06.2296650Z * [new tag] ciflow/trunk/158914 -> ciflow/trunk/158914 2025-08-14T21:24:06.2297172Z * [new tag] ciflow/trunk/158965 -> ciflow/trunk/158965 2025-08-14T21:24:06.2297619Z * [new tag] ciflow/trunk/158987 -> ciflow/trunk/158987 2025-08-14T21:24:06.2298368Z * [new tag] ciflow/trunk/159033 -> ciflow/trunk/159033 2025-08-14T21:24:06.2298631Z * [new tag] ciflow/trunk/159140 -> ciflow/trunk/159140 2025-08-14T21:24:06.2299288Z * [new tag] ciflow/trunk/159158 -> ciflow/trunk/159158 2025-08-14T21:24:06.2299600Z * [new tag] ciflow/trunk/159553 -> ciflow/trunk/159553 2025-08-14T21:24:06.2300256Z * [new tag] ciflow/trunk/159562 -> ciflow/trunk/159562 2025-08-14T21:24:06.2304844Z * [new tag] ciflow/trunk/159682 -> ciflow/trunk/159682 2025-08-14T21:24:06.2305001Z * [new tag] ciflow/trunk/159691 -> ciflow/trunk/159691 2025-08-14T21:24:06.2305332Z * [new tag] ciflow/trunk/159842 -> ciflow/trunk/159842 2025-08-14T21:24:06.2305457Z * [new tag] ciflow/trunk/159889 -> ciflow/trunk/159889 2025-08-14T21:24:06.2305582Z * [new tag] ciflow/trunk/159923 -> ciflow/trunk/159923 2025-08-14T21:24:06.2305695Z * [new tag] ciflow/trunk/160004 -> ciflow/trunk/160004 2025-08-14T21:24:06.2305809Z * [new tag] ciflow/trunk/160113 -> ciflow/trunk/160113 2025-08-14T21:24:06.2305945Z * [new tag] ciflow/trunk/160161 -> ciflow/trunk/160161 2025-08-14T21:24:06.2306142Z * [new tag] ciflow/trunk/160168 -> ciflow/trunk/160168 2025-08-14T21:24:06.2306259Z * [new tag] ciflow/trunk/160181 -> ciflow/trunk/160181 
2025-08-14T21:24:06.2306377Z * [new tag] ciflow/trunk/160183 -> ciflow/trunk/160183 2025-08-14T21:24:06.2306533Z * [new tag] ciflow/trunk/160190 -> ciflow/trunk/160190 2025-08-14T21:24:06.2306711Z * [new tag] ciflow/trunk/160198 -> ciflow/trunk/160198 2025-08-14T21:24:06.2307629Z * [new tag] ciflow/trunk/160205 -> ciflow/trunk/160205 2025-08-14T21:24:06.2307888Z * [new tag] ciflow/trunk/160219 -> ciflow/trunk/160219 2025-08-14T21:24:06.2308849Z * [new tag] ciflow/trunk/160224 -> ciflow/trunk/160224 2025-08-14T21:24:06.2309076Z * [new tag] ciflow/trunk/160250 -> ciflow/trunk/160250 2025-08-14T21:24:06.2310061Z * [new tag] ciflow/trunk/160253 -> ciflow/trunk/160253 2025-08-14T21:24:06.2310286Z * [new tag] ciflow/trunk/160335 -> ciflow/trunk/160335 2025-08-14T21:24:06.2310415Z * [new tag] ciflow/trunk/160338 -> ciflow/trunk/160338 2025-08-14T21:24:06.2310675Z * [new tag] ciflow/trunk/160383 -> ciflow/trunk/160383 2025-08-14T21:24:06.2312033Z * [new tag] ciflow/trunk/160401 -> ciflow/trunk/160401 2025-08-14T21:24:06.2312186Z * [new tag] ciflow/trunk/160403 -> ciflow/trunk/160403 2025-08-14T21:24:06.2312297Z * [new tag] ciflow/trunk/160430 -> ciflow/trunk/160430 2025-08-14T21:24:06.2312593Z * [new tag] ciflow/trunk/160431 -> ciflow/trunk/160431 2025-08-14T21:24:06.2313369Z * [new tag] ciflow/trunk/160439 -> ciflow/trunk/160439 2025-08-14T21:24:06.2313555Z * [new tag] ciflow/trunk/160449 -> ciflow/trunk/160449 2025-08-14T21:24:06.2316067Z * [new tag] ciflow/trunk/160454 -> ciflow/trunk/160454 2025-08-14T21:24:06.2316227Z * [new tag] ciflow/trunk/160468 -> ciflow/trunk/160468 2025-08-14T21:24:06.2316345Z * [new tag] ciflow/trunk/160481 -> ciflow/trunk/160481 2025-08-14T21:24:06.2316467Z * [new tag] ciflow/trunk/160485 -> ciflow/trunk/160485 2025-08-14T21:24:06.2316579Z * [new tag] ciflow/trunk/160519 -> ciflow/trunk/160519 2025-08-14T21:24:06.2316692Z * [new tag] ciflow/trunk/160527 -> ciflow/trunk/160527 2025-08-14T21:24:06.2317070Z * [new tag] ciflow/trunk/160560 -> ciflow/trunk/160560 2025-08-14T21:24:06.2317502Z * [new tag] ciflow/trunk/160578 -> ciflow/trunk/160578 2025-08-14T21:24:06.2317959Z * [new tag] ciflow/trunk/160589 -> ciflow/trunk/160589 2025-08-14T21:24:06.2318421Z * [new tag] ciflow/trunk/160592 -> ciflow/trunk/160592 2025-08-14T21:24:06.2318851Z * [new tag] ciflow/trunk/160649 -> ciflow/trunk/160649 2025-08-14T21:24:06.2319382Z * [new tag] ciflow/trunk/160656 -> ciflow/trunk/160656 2025-08-14T21:24:06.2320728Z * [new tag] ciflow/unstable/123 -> ciflow/unstable/123 2025-08-14T21:24:06.2320971Z * [new tag] ciflow/vllm/160116 -> ciflow/vllm/160116 2025-08-14T21:24:06.2321099Z * [new tag] ciflow/vllm/160583 -> ciflow/vllm/160583 2025-08-14T21:24:06.2321497Z * [new tag] ciflow/vllm/160619 -> ciflow/vllm/160619 2025-08-14T21:24:06.2321911Z * [new tag] ciflow/vllm/160625 -> ciflow/vllm/160625 2025-08-14T21:24:06.2322343Z * [new tag] ciflow/vllm/160627 -> ciflow/vllm/160627 2025-08-14T21:24:06.2323674Z * [new tag] ciflow/win-arm64/156049 -> ciflow/win-arm64/156049 2025-08-14T21:24:06.2324007Z * [new tag] ciflow/win-arm64/158104 -> ciflow/win-arm64/158104 2025-08-14T21:24:06.2324155Z * [new tag] ciflow/win-arm64/159553 -> ciflow/win-arm64/159553 2025-08-14T21:24:06.2324365Z * [new tag] ciflow/win-arm64/159562 -> ciflow/win-arm64/159562 2025-08-14T21:24:06.2324722Z * [new tag] ciflow/win-arm64/159777 -> ciflow/win-arm64/159777 2025-08-14T21:24:06.2325138Z * [new tag] ciflow/win-arm64/159780 -> ciflow/win-arm64/159780 2025-08-14T21:24:06.2325640Z * [new tag] ciflow/win-arm64/159842 -> 
ciflow/win-arm64/159842 2025-08-14T21:24:06.2326249Z * [new tag] ciflow/win-arm64/160250 -> ciflow/win-arm64/160250 2025-08-14T21:24:06.2326723Z * [new tag] ciflow/win-arm64/160253 -> ciflow/win-arm64/160253 2025-08-14T21:24:06.2327016Z * [new tag] ciflow/win-arm64/160454 -> ciflow/win-arm64/160454 2025-08-14T21:24:06.2327478Z * [new tag] ciflow/win-arm64/160560 -> ciflow/win-arm64/160560 2025-08-14T21:24:06.2327939Z * [new tag] ciflow/xpu/138996 -> ciflow/xpu/138996 2025-08-14T21:24:06.2328269Z * [new tag] ciflow/xpu/139971 -> ciflow/xpu/139971 2025-08-14T21:24:06.2331454Z * [new tag] ciflow/xpu/140972 -> ciflow/xpu/140972 2025-08-14T21:24:06.2331749Z * [new tag] ciflow/xpu/143553 -> ciflow/xpu/143553 2025-08-14T21:24:06.2331893Z * [new tag] ciflow/xpu/156272 -> ciflow/xpu/156272 2025-08-14T21:24:06.2332001Z * [new tag] ciflow/xpu/156812 -> ciflow/xpu/156812 2025-08-14T21:24:06.2332114Z * [new tag] ciflow/xpu/157699 -> ciflow/xpu/157699 2025-08-14T21:24:06.2332232Z * [new tag] ciflow/xpu/157994 -> ciflow/xpu/157994 2025-08-14T21:24:06.2332466Z * [new tag] ciflow/xpu/158336 -> ciflow/xpu/158336 2025-08-14T21:24:06.2332590Z * [new tag] ciflow/xpu/158733 -> ciflow/xpu/158733 2025-08-14T21:24:06.2333225Z * [new tag] ciflow/xpu/159033 -> ciflow/xpu/159033 2025-08-14T21:24:06.2333654Z * [new tag] ciflow/xpu/159118 -> ciflow/xpu/159118 2025-08-14T21:24:06.2333782Z * [new tag] ciflow/xpu/159140 -> ciflow/xpu/159140 2025-08-14T21:24:06.2334349Z * [new tag] ciflow/xpu/159241 -> ciflow/xpu/159241 2025-08-14T21:24:06.2334881Z * [new tag] ciflow/xpu/159473 -> ciflow/xpu/159473 2025-08-14T21:24:06.2335455Z * [new tag] ciflow/xpu/159474 -> ciflow/xpu/159474 2025-08-14T21:24:06.2335811Z * [new tag] ciflow/xpu/159553 -> ciflow/xpu/159553 2025-08-14T21:24:06.2336276Z * [new tag] ciflow/xpu/159944 -> ciflow/xpu/159944 2025-08-14T21:24:06.2337193Z * [new tag] ciflow/xpu/160062 -> ciflow/xpu/160062 2025-08-14T21:24:06.2337493Z * [new tag] ciflow/xpu/160067 -> ciflow/xpu/160067 2025-08-14T21:24:06.2337954Z * [new tag] ciflow/xpu/160158 -> ciflow/xpu/160158 2025-08-14T21:24:06.2338344Z * [new tag] ciflow/xpu/160173 -> ciflow/xpu/160173 2025-08-14T21:24:06.2338879Z * [new tag] ciflow/xpu/160183 -> ciflow/xpu/160183 2025-08-14T21:24:06.2339549Z * [new tag] ciflow/xpu/160301 -> ciflow/xpu/160301 2025-08-14T21:24:06.2340061Z * [new tag] ciflow/xpu/160403 -> ciflow/xpu/160403 2025-08-14T21:24:06.2340321Z * [new tag] ciflow/xpu/160606 -> ciflow/xpu/160606 2025-08-14T21:24:06.2341353Z * [new tag] cslpull75 -> cslpull75 2025-08-14T21:24:06.2341634Z * [new tag] cslpull76 -> cslpull76 2025-08-14T21:24:06.2342270Z * [new tag] cslpull77 -> cslpull77 2025-08-14T21:24:06.2344109Z * [new tag] cslpull78 -> cslpull78 2025-08-14T21:24:06.2344255Z * [new tag] cslpull79 -> cslpull79 2025-08-14T21:24:06.2344707Z * [new tag] cslpull80 -> cslpull80 2025-08-14T21:24:06.2345211Z * [new tag] cslpull81 -> cslpull81 2025-08-14T21:24:06.2345693Z * [new tag] cslpull82 -> cslpull82 2025-08-14T21:24:06.2347884Z * [new tag] cslpull83 -> cslpull83 2025-08-14T21:24:06.2348038Z * [new tag] cslpull84 -> cslpull84 2025-08-14T21:24:06.2348146Z * [new tag] cslpull85 -> cslpull85 2025-08-14T21:24:06.2348508Z * [new tag] cslpull86 -> cslpull86 2025-08-14T21:24:06.2348996Z * [new tag] cslpull87 -> cslpull87 2025-08-14T21:24:06.2349831Z * [new tag] cslpull88 -> cslpull88 2025-08-14T21:24:06.2349981Z * [new tag] cslpull89 -> cslpull89 2025-08-14T21:24:06.2351309Z * [new tag] cslpull90 -> cslpull90 2025-08-14T21:24:06.2351429Z * [new tag] cslpull91 -> 
cslpull91 2025-08-14T21:24:06.2352688Z * [new tag] cslpull92 -> cslpull92 2025-08-14T21:24:06.2353189Z * [new tag] flight_5 -> flight_5 2025-08-14T21:24:06.2353341Z * [new tag] flight_5.1 -> flight_5.1 2025-08-14T21:24:06.2353893Z * [new tag] flight_5.2 -> flight_5.2 2025-08-14T21:24:06.2354334Z * [new tag] flight_5.3 -> flight_5.3 2025-08-14T21:24:06.2358136Z * [new tag] forpull1 -> forpull1 2025-08-14T21:24:06.2358311Z * [new tag] malfet/tag-2ef5611 -> malfet/tag-2ef5611 2025-08-14T21:24:06.2358444Z * [new tag] malfet/tag-317b1a0 -> malfet/tag-317b1a0 2025-08-14T21:24:06.2358573Z * [new tag] malfet/tag-ec6f767 -> malfet/tag-ec6f767 2025-08-14T21:24:06.2358701Z * [new tag] nightly-binary -> nightly-binary 2025-08-14T21:24:06.2358841Z * [new tag] sqzhang_flight4_plus -> sqzhang_flight4_plus 2025-08-14T21:24:06.2359225Z * [new tag] sqzhang_flight_3 -> sqzhang_flight_3 2025-08-14T21:24:06.2360230Z * [new tag] trunk/01584d2a7d029c9749eb73678cf1dc313cc35df6 -> trunk/01584d2a7d029c9749eb73678cf1dc313cc35df6 2025-08-14T21:24:06.2360485Z * [new tag] trunk/017259f9c65b6fad55fb9597d7077e2543eaae46 -> trunk/017259f9c65b6fad55fb9597d7077e2543eaae46 2025-08-14T21:24:06.2361919Z * [new tag] trunk/01bcf9a40dea937637d2cdd530bed2652510943d -> trunk/01bcf9a40dea937637d2cdd530bed2652510943d 2025-08-14T21:24:06.2362164Z * [new tag] trunk/01f66d08d93365015f4af005a252f439c4d4013a -> trunk/01f66d08d93365015f4af005a252f439c4d4013a 2025-08-14T21:24:06.2362606Z * [new tag] trunk/03b254e49f2d4c092e6ca712e5702cf2895aa47e -> trunk/03b254e49f2d4c092e6ca712e5702cf2895aa47e 2025-08-14T21:24:06.2363104Z * [new tag] trunk/05029ad1c30865d3f7e7fd13384db9d826e563eb -> trunk/05029ad1c30865d3f7e7fd13384db9d826e563eb 2025-08-14T21:24:06.2363566Z * [new tag] trunk/05c19d1acecc01b0d2512364183058a6885b9869 -> trunk/05c19d1acecc01b0d2512364183058a6885b9869 2025-08-14T21:24:06.2364216Z * [new tag] trunk/05c417715f791875fbf28cfc3fc86142de1a3206 -> trunk/05c417715f791875fbf28cfc3fc86142de1a3206 2025-08-14T21:24:06.2364992Z * [new tag] trunk/06824f3c7268bb807a422b663047cd0900ddd126 -> trunk/06824f3c7268bb807a422b663047cd0900ddd126 2025-08-14T21:24:06.2365450Z * [new tag] trunk/077cb389746a7d61cfc018aad2ba29a8aa195610 -> trunk/077cb389746a7d61cfc018aad2ba29a8aa195610 2025-08-14T21:24:06.2365915Z * [new tag] trunk/089c4a1ba007ed4abb3e5e0eafd97b7584566057 -> trunk/089c4a1ba007ed4abb3e5e0eafd97b7584566057 2025-08-14T21:24:06.2366430Z * [new tag] trunk/09381f5dacda7bbbfa361f5df76bde5cd309adc1 -> trunk/09381f5dacda7bbbfa361f5df76bde5cd309adc1 2025-08-14T21:24:06.2366968Z * [new tag] trunk/0bd3af4fb87445f4de3a1f9b823e399c8b3cefde -> trunk/0bd3af4fb87445f4de3a1f9b823e399c8b3cefde 2025-08-14T21:24:06.2367541Z * [new tag] trunk/0d3461bac0fb5177e35152d980b301ea3a0aa2c4 -> trunk/0d3461bac0fb5177e35152d980b301ea3a0aa2c4 2025-08-14T21:24:06.2368052Z * [new tag] trunk/0d40ff3b496e68193bc16d5391fa2e3623709f81 -> trunk/0d40ff3b496e68193bc16d5391fa2e3623709f81 2025-08-14T21:24:06.2368788Z * [new tag] trunk/0d71ca2c46753bb268bfdcf815c14415c122a289 -> trunk/0d71ca2c46753bb268bfdcf815c14415c122a289 2025-08-14T21:24:06.2369198Z * [new tag] trunk/0d88593dd826544c9e7bd4aa615ef86847a78d2b -> trunk/0d88593dd826544c9e7bd4aa615ef86847a78d2b 2025-08-14T21:24:06.2369819Z * [new tag] trunk/0e3e377bd5126cfcc69d70c4d77b352d3404cc11 -> trunk/0e3e377bd5126cfcc69d70c4d77b352d3404cc11 2025-08-14T21:24:06.2370291Z * [new tag] trunk/0f3b10b8eebe68e3c75d473d499b87dfe14a2eca -> trunk/0f3b10b8eebe68e3c75d473d499b87dfe14a2eca 2025-08-14T21:24:06.2373217Z * [new tag] 
trunk/101276f81b4d2a8c31bfd6796b986d4c1bfdf483 -> trunk/101276f81b4d2a8c31bfd6796b986d4c1bfdf483 2025-08-14T21:24:06.2373509Z * [new tag] trunk/1028c5e2d50e121865bf98307e7c035f549a24b2 -> trunk/1028c5e2d50e121865bf98307e7c035f549a24b2 2025-08-14T21:24:06.2373818Z * [new tag] trunk/10bc36fe840cb3510fab84d2ea22663b76702f1e -> trunk/10bc36fe840cb3510fab84d2ea22663b76702f1e 2025-08-14T21:24:06.2374060Z * [new tag] trunk/10e3514c962b58cbbee994257872a626ff76d51b -> trunk/10e3514c962b58cbbee994257872a626ff76d51b 2025-08-14T21:24:06.2374292Z * [new tag] trunk/1128f4c2a822cbe34a9d966306af15097179ffe1 -> trunk/1128f4c2a822cbe34a9d966306af15097179ffe1 2025-08-14T21:24:06.2374542Z * [new tag] trunk/114a6c40434bfb9cfa5abc30e9e34d81300d743e -> trunk/114a6c40434bfb9cfa5abc30e9e34d81300d743e 2025-08-14T21:24:06.2374795Z * [new tag] trunk/118bc97b14c24ac88a4b0c0750a9e7bf93154c76 -> trunk/118bc97b14c24ac88a4b0c0750a9e7bf93154c76 2025-08-14T21:24:06.2375067Z * [new tag] trunk/1196bb1c2e4d5a7edc09f2260e3034132f0c6c91 -> trunk/1196bb1c2e4d5a7edc09f2260e3034132f0c6c91 2025-08-14T21:24:06.2375675Z * [new tag] trunk/11a3565f1872bbad9c253a127e8d4ce7a1b40ec8 -> trunk/11a3565f1872bbad9c253a127e8d4ce7a1b40ec8 2025-08-14T21:24:06.2376264Z * [new tag] trunk/15e49f61643e4c0eef420f0981609709ef55b848 -> trunk/15e49f61643e4c0eef420f0981609709ef55b848 2025-08-14T21:24:06.2376839Z * [new tag] trunk/16d15445f8bd8740095b23de4af89d757af793ca -> trunk/16d15445f8bd8740095b23de4af89d757af793ca 2025-08-14T21:24:06.2377394Z * [new tag] trunk/178515d0ff6833c8e9221482b2a650ab31e00019 -> trunk/178515d0ff6833c8e9221482b2a650ab31e00019 2025-08-14T21:24:06.2377965Z * [new tag] trunk/182efe31dbe43376e7eef7338356aaf94d5bcabe -> trunk/182efe31dbe43376e7eef7338356aaf94d5bcabe 2025-08-14T21:24:06.2378583Z * [new tag] trunk/194fcfcfbdad0add1a1b695321e31a576058f4cf -> trunk/194fcfcfbdad0add1a1b695321e31a576058f4cf 2025-08-14T21:24:06.2379103Z * [new tag] trunk/195b5c2e27eb8f21cbc8ad1e90f42db5a8cfccca -> trunk/195b5c2e27eb8f21cbc8ad1e90f42db5a8cfccca 2025-08-14T21:24:06.2379885Z * [new tag] trunk/198b5fd2d47fa3d5110ceba6827a3b18e0064014 -> trunk/198b5fd2d47fa3d5110ceba6827a3b18e0064014 2025-08-14T21:24:06.2380442Z * [new tag] trunk/199e9abb6a366bbd27c39d1da7c3123b4eea9b5a -> trunk/199e9abb6a366bbd27c39d1da7c3123b4eea9b5a 2025-08-14T21:24:06.2380955Z * [new tag] trunk/19b4283884b2d9b3a0eb364da10b1540d14ab7a7 -> trunk/19b4283884b2d9b3a0eb364da10b1540d14ab7a7 2025-08-14T21:24:06.2382008Z * [new tag] trunk/1c2587119152cec3905647a47c65d3d26619c5a8 -> trunk/1c2587119152cec3905647a47c65d3d26619c5a8 2025-08-14T21:24:06.2382273Z * [new tag] trunk/1c26c53851c212a7c90a325549a72f0571613a8c -> trunk/1c26c53851c212a7c90a325549a72f0571613a8c 2025-08-14T21:24:06.2382909Z * [new tag] trunk/1c2cba17eab2b09d87142883da2bdbdbcf018613 -> trunk/1c2cba17eab2b09d87142883da2bdbdbcf018613 2025-08-14T21:24:06.2383436Z * [new tag] trunk/1d80d697a269234b47ec7ede192faf3bb9b159e3 -> trunk/1d80d697a269234b47ec7ede192faf3bb9b159e3 2025-08-14T21:24:06.2383976Z * [new tag] trunk/1ea688f9a2602fbcde32c0302b822526ca4219dc -> trunk/1ea688f9a2602fbcde32c0302b822526ca4219dc 2025-08-14T21:24:06.2384855Z * [new tag] trunk/1f4057c11ac941fb324386ca594d0a6882185aad -> trunk/1f4057c11ac941fb324386ca594d0a6882185aad 2025-08-14T21:24:06.2385171Z * [new tag] trunk/1fc683cf17c8c673044538d10266c00f92987be2 -> trunk/1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:24:06.2385769Z * [new tag] trunk/1febab2a89302464f6c7d69cfbef7a24c421ea65 -> trunk/1febab2a89302464f6c7d69cfbef7a24c421ea65 
2025-08-14T21:24:06.2386290Z * [new tag] trunk/206c1eef6571f906c2792d899a09136b3fce9673 -> trunk/206c1eef6571f906c2792d899a09136b3fce9673 2025-08-14T21:24:06.2387027Z * [new tag] trunk/20bdabbb3c5d6b118a94b2e045c777662563d5bb -> trunk/20bdabbb3c5d6b118a94b2e045c777662563d5bb 2025-08-14T21:24:06.2387440Z * [new tag] trunk/21392c0e06ac2b2621950455975ca6332f0bf641 -> trunk/21392c0e06ac2b2621950455975ca6332f0bf641 2025-08-14T21:24:06.2387943Z * [new tag] trunk/2247aa6d1d43e256255f5c74a781c3190a4387b6 -> trunk/2247aa6d1d43e256255f5c74a781c3190a4387b6 2025-08-14T21:24:06.2388343Z * [new tag] trunk/2259dbed4e0d3f2a8174b5847fd0741aed42451d -> trunk/2259dbed4e0d3f2a8174b5847fd0741aed42451d 2025-08-14T21:24:06.2389014Z * [new tag] trunk/231c72240d80091f099c95e326d3600cba866eee -> trunk/231c72240d80091f099c95e326d3600cba866eee 2025-08-14T21:24:06.2389478Z * [new tag] trunk/24257f5bfaa37795f74d9f64c1b43584128d4b8c -> trunk/24257f5bfaa37795f74d9f64c1b43584128d4b8c 2025-08-14T21:24:06.2390743Z * [new tag] trunk/24f43d0da7ad9c6e95a09a2fee610387728cc1cd -> trunk/24f43d0da7ad9c6e95a09a2fee610387728cc1cd 2025-08-14T21:24:06.2391078Z * [new tag] trunk/2898d3f965e5cd9d02fc2ecdab7c580fd457fea9 -> trunk/2898d3f965e5cd9d02fc2ecdab7c580fd457fea9 2025-08-14T21:24:06.2391572Z * [new tag] trunk/28ccc9e7247798980fe00a11bcd64a8016b5f227 -> trunk/28ccc9e7247798980fe00a11bcd64a8016b5f227 2025-08-14T21:24:06.2392128Z * [new tag] trunk/29712314dd5cf500a8ea3d1c69483a3cb768ca72 -> trunk/29712314dd5cf500a8ea3d1c69483a3cb768ca72 2025-08-14T21:24:06.2392684Z * [new tag] trunk/29d20d49f0b7f4e362e1cefdcdc4b5659969312c -> trunk/29d20d49f0b7f4e362e1cefdcdc4b5659969312c 2025-08-14T21:24:06.2393983Z * [new tag] trunk/2c5e10a5fceb208b11c3d569ae02e348b5893b31 -> trunk/2c5e10a5fceb208b11c3d569ae02e348b5893b31 2025-08-14T21:24:06.2394309Z * [new tag] trunk/2d0cdee394bccadcd0abe19dd4623ed978a331ad -> trunk/2d0cdee394bccadcd0abe19dd4623ed978a331ad 2025-08-14T21:24:06.2394879Z * [new tag] trunk/2e4e5ab4be9e0aeffd9c49b5b2f9f820bd0895b1 -> trunk/2e4e5ab4be9e0aeffd9c49b5b2f9f820bd0895b1 2025-08-14T21:24:06.2395527Z * [new tag] trunk/2ea40fba841b3af8103f332ba62e54f350ba9a51 -> trunk/2ea40fba841b3af8103f332ba62e54f350ba9a51 2025-08-14T21:24:06.2396029Z * [new tag] trunk/2ee22e435131369a7e4f8cc4732579acc29a941b -> trunk/2ee22e435131369a7e4f8cc4732579acc29a941b 2025-08-14T21:24:06.2396586Z * [new tag] trunk/2f4c2226175512af787725c4d5ad7313c60d4db1 -> trunk/2f4c2226175512af787725c4d5ad7313c60d4db1 2025-08-14T21:24:06.2397010Z * [new tag] trunk/3008d985a8fc155eb89374afff50cb33a6bd10d5 -> trunk/3008d985a8fc155eb89374afff50cb33a6bd10d5 2025-08-14T21:24:06.2397584Z * [new tag] trunk/3028fa6ce9d9c96671722ab8213a1a30670d7cf2 -> trunk/3028fa6ce9d9c96671722ab8213a1a30670d7cf2 2025-08-14T21:24:06.2398280Z * [new tag] trunk/303c614f3df95ae2b659c5f6c1838b14e4776ce6 -> trunk/303c614f3df95ae2b659c5f6c1838b14e4776ce6 2025-08-14T21:24:06.2399708Z * [new tag] trunk/305fa2239365ad17ac9c534a68bba8a149c42d67 -> trunk/305fa2239365ad17ac9c534a68bba8a149c42d67 2025-08-14T21:24:06.2399965Z * [new tag] trunk/31c9ac4319c0cc2ed8c6be701c6ccf73f6cb4706 -> trunk/31c9ac4319c0cc2ed8c6be701c6ccf73f6cb4706 2025-08-14T21:24:06.2400352Z * [new tag] trunk/32099961d588fc19ead8afe805d6b5108de75669 -> trunk/32099961d588fc19ead8afe805d6b5108de75669 2025-08-14T21:24:06.2400908Z * [new tag] trunk/32e5e2f596d55bb9441d5d53f3c58bcb55828047 -> trunk/32e5e2f596d55bb9441d5d53f3c58bcb55828047 2025-08-14T21:24:06.2401459Z * [new tag] trunk/334b38ccc4427b1d14981c48a3a0b92180d58225 -> 
trunk/334b38ccc4427b1d14981c48a3a0b92180d58225 2025-08-14T21:24:06.2401980Z * [new tag] trunk/334ecbd4ffe11858cae7d23d1190ddb4777c2513 -> trunk/334ecbd4ffe11858cae7d23d1190ddb4777c2513 2025-08-14T21:24:06.2402619Z * [new tag] trunk/33d94018668951611b318b7515ae96f04e48eac0 -> trunk/33d94018668951611b318b7515ae96f04e48eac0 2025-08-14T21:24:06.2403037Z * [new tag] trunk/34358f335d95213d96b6cca6a83e7bf3af6a9fcb -> trunk/34358f335d95213d96b6cca6a83e7bf3af6a9fcb 2025-08-14T21:24:06.2403674Z * [new tag] trunk/34ec5ed275f8aa875c80daa97b3e82af0b06f673 -> trunk/34ec5ed275f8aa875c80daa97b3e82af0b06f673 2025-08-14T21:24:06.2404177Z * [new tag] trunk/355462e1278d818deb9ef4a184073d5b66074816 -> trunk/355462e1278d818deb9ef4a184073d5b66074816 2025-08-14T21:24:06.2411040Z * [new tag] trunk/3626ba711b34397d1fbf0a9b1979f85cbf68b919 -> trunk/3626ba711b34397d1fbf0a9b1979f85cbf68b919 2025-08-14T21:24:06.2411537Z * [new tag] trunk/36f46d082a4954921cb8493223f000f2aab79ed7 -> trunk/36f46d082a4954921cb8493223f000f2aab79ed7 2025-08-14T21:24:06.2411778Z * [new tag] trunk/39aa3d1471549b7829c207d634dfdc1d26e346a2 -> trunk/39aa3d1471549b7829c207d634dfdc1d26e346a2 2025-08-14T21:24:06.2412013Z * [new tag] trunk/3a562374401113187ce2566b87e3f1d87d7c53aa -> trunk/3a562374401113187ce2566b87e3f1d87d7c53aa 2025-08-14T21:24:06.2412261Z * [new tag] trunk/3ac86e728dfaa7383ff7f865e9e7d33486188dae -> trunk/3ac86e728dfaa7383ff7f865e9e7d33486188dae 2025-08-14T21:24:06.2412525Z * [new tag] trunk/3be70dc30e893b552fc0f23ca06cd8f7949b6d08 -> trunk/3be70dc30e893b552fc0f23ca06cd8f7949b6d08 2025-08-14T21:24:06.2412776Z * [new tag] trunk/3cec82a7e9aea040a34dd7a2587ae6d3bd65dba0 -> trunk/3cec82a7e9aea040a34dd7a2587ae6d3bd65dba0 2025-08-14T21:24:06.2413018Z * [new tag] trunk/3cf7b4024ef83e44e9ae223dbff7c7ab68240cb2 -> trunk/3cf7b4024ef83e44e9ae223dbff7c7ab68240cb2 2025-08-14T21:24:06.2413275Z * [new tag] trunk/3ef2e1ef769582a82c6ddf150e9d11bf4bf1c44f -> trunk/3ef2e1ef769582a82c6ddf150e9d11bf4bf1c44f 2025-08-14T21:24:06.2413524Z * [new tag] trunk/3f1636ebef9b45e8a3cb0eb20d327ee6acb74be0 -> trunk/3f1636ebef9b45e8a3cb0eb20d327ee6acb74be0 2025-08-14T21:24:06.2413918Z * [new tag] trunk/3faee0a6318afcbbbb48687009a459214910d820 -> trunk/3faee0a6318afcbbbb48687009a459214910d820 2025-08-14T21:24:06.2414606Z * [new tag] trunk/3fcd79e023da7156ac584992ebab29205d3b7881 -> trunk/3fcd79e023da7156ac584992ebab29205d3b7881 2025-08-14T21:24:06.2415097Z * [new tag] trunk/3fe19a7a0af3f4d692af30476c320be18c7e8ae6 -> trunk/3fe19a7a0af3f4d692af30476c320be18c7e8ae6 2025-08-14T21:24:06.2415715Z * [new tag] trunk/41673110cd7c5960824cc74a6fcaeda1a8bc7a23 -> trunk/41673110cd7c5960824cc74a6fcaeda1a8bc7a23 2025-08-14T21:24:06.2416239Z * [new tag] trunk/4183d4ff3dcc1d87400326a9a7998c3f9e966f60 -> trunk/4183d4ff3dcc1d87400326a9a7998c3f9e966f60 2025-08-14T21:24:06.2416809Z * [new tag] trunk/422bd6808bb98cbbac31d157d9c82ad11ba9732d -> trunk/422bd6808bb98cbbac31d157d9c82ad11ba9732d 2025-08-14T21:24:06.2417358Z * [new tag] trunk/42e51cd4b3973a053fcfa80878a3f346fd158e9f -> trunk/42e51cd4b3973a053fcfa80878a3f346fd158e9f 2025-08-14T21:24:06.2418011Z * [new tag] trunk/4416433c7c625127b7f975c92f8ec98ea4c67fd3 -> trunk/4416433c7c625127b7f975c92f8ec98ea4c67fd3 2025-08-14T21:24:06.2418569Z * [new tag] trunk/45ba7ecda876685b083cbbe932450560c566826b -> trunk/45ba7ecda876685b083cbbe932450560c566826b 2025-08-14T21:24:06.2419850Z * [new tag] trunk/47a1db823dfcdacdb99f317428fc3791a18c5812 -> trunk/47a1db823dfcdacdb99f317428fc3791a18c5812 2025-08-14T21:24:06.2420126Z * [new tag] 
trunk/4a773e1e867f28a8ff0b15203e5cd9548f74fcee -> trunk/4a773e1e867f28a8ff0b15203e5cd9548f74fcee 2025-08-14T21:24:06.2420716Z * [new tag] trunk/4a90dc0c1f68d1f98832b169f792ed1bb195a0f3 -> trunk/4a90dc0c1f68d1f98832b169f792ed1bb195a0f3 2025-08-14T21:24:06.2421397Z * [new tag] trunk/4cde0acc0e4e795e1a12cbdd9b93c8c04c1fa05d -> trunk/4cde0acc0e4e795e1a12cbdd9b93c8c04c1fa05d 2025-08-14T21:24:06.2422000Z * [new tag] trunk/4d419a74610c32b1372f8802dcc61893740a23cf -> trunk/4d419a74610c32b1372f8802dcc61893740a23cf 2025-08-14T21:24:06.2422547Z * [new tag] trunk/4d5b3f2d5af7c8e4f41da4ffca53fafe8bb86235 -> trunk/4d5b3f2d5af7c8e4f41da4ffca53fafe8bb86235 2025-08-14T21:24:06.2424036Z * [new tag] trunk/4e2ddb5db67617f9f5309c8bba0c17adc84cadbc -> trunk/4e2ddb5db67617f9f5309c8bba0c17adc84cadbc 2025-08-14T21:24:06.2424655Z * [new tag] trunk/50a8c118754a6c5a46968f5c8e215ccba6831d42 -> trunk/50a8c118754a6c5a46968f5c8e215ccba6831d42 2025-08-14T21:24:06.2424937Z * [new tag] trunk/50f23ff6f883db5021dd6bab4c146434f98dd15d -> trunk/50f23ff6f883db5021dd6bab4c146434f98dd15d 2025-08-14T21:24:06.2425355Z * [new tag] trunk/515cb70367e84fcbad23fcc5b39eb1d7706df2aa -> trunk/515cb70367e84fcbad23fcc5b39eb1d7706df2aa 2025-08-14T21:24:06.2425769Z * [new tag] trunk/53e39494958b7e2278cc8176f63636e812e8945f -> trunk/53e39494958b7e2278cc8176f63636e812e8945f 2025-08-14T21:24:06.2426279Z * [new tag] trunk/556e2a73f4f0643f7c2aeb5c7dddda43388a40ce -> trunk/556e2a73f4f0643f7c2aeb5c7dddda43388a40ce 2025-08-14T21:24:06.2426892Z * [new tag] trunk/5665dc9ab76b84d7c90d845ffb0f6349b3621919 -> trunk/5665dc9ab76b84d7c90d845ffb0f6349b3621919 2025-08-14T21:24:06.2427498Z * [new tag] trunk/566c6d52ef1411c8262d7b9cf85e2044fdfbe1a3 -> trunk/566c6d52ef1411c8262d7b9cf85e2044fdfbe1a3 2025-08-14T21:24:06.2427972Z * [new tag] trunk/56c828bef93eada0e18d2cc013207831ca80cc99 -> trunk/56c828bef93eada0e18d2cc013207831ca80cc99 2025-08-14T21:24:06.2428568Z * [new tag] trunk/5737372862253a0ac0292407a5844796f02380ad -> trunk/5737372862253a0ac0292407a5844796f02380ad 2025-08-14T21:24:06.2429152Z * [new tag] trunk/57f738b6357cc8fcdde479a0948e723809a1a44d -> trunk/57f738b6357cc8fcdde479a0948e723809a1a44d 2025-08-14T21:24:06.2429738Z * [new tag] trunk/5a40c5784482255b9baf14086cc4b9349fc6d512 -> trunk/5a40c5784482255b9baf14086cc4b9349fc6d512 2025-08-14T21:24:06.2430269Z * [new tag] trunk/5a9c4cfce42b9eb87da0de40c5633f083115c307 -> trunk/5a9c4cfce42b9eb87da0de40c5633f083115c307 2025-08-14T21:24:06.2431097Z * [new tag] trunk/5ace061254af71aa83d1baae81aa1864c9746add -> trunk/5ace061254af71aa83d1baae81aa1864c9746add 2025-08-14T21:24:06.2431451Z * [new tag] trunk/5dddcd5b07c6644efca8d613f4eca1dc95daa87f -> trunk/5dddcd5b07c6644efca8d613f4eca1dc95daa87f 2025-08-14T21:24:06.2432070Z * [new tag] trunk/5ed4f9177907fe403ec4c4499d0d0e9be6b68fcf -> trunk/5ed4f9177907fe403ec4c4499d0d0e9be6b68fcf 2025-08-14T21:24:06.2432691Z * [new tag] trunk/5f1010fbb3850d99c8fdf9a9de2f79260cdc586a -> trunk/5f1010fbb3850d99c8fdf9a9de2f79260cdc586a 2025-08-14T21:24:06.2433079Z * [new tag] trunk/5f5f508aa836a46dfe88857fb223049616b94e93 -> trunk/5f5f508aa836a46dfe88857fb223049616b94e93 2025-08-14T21:24:06.2433931Z * [new tag] trunk/62bac0798100e0e06a86b7a4cee1788413e3d0ca -> trunk/62bac0798100e0e06a86b7a4cee1788413e3d0ca 2025-08-14T21:24:06.2434240Z * [new tag] trunk/63654ba4c5178fd12220cfc9d1c878af2fdd07cc -> trunk/63654ba4c5178fd12220cfc9d1c878af2fdd07cc 2025-08-14T21:24:06.2434852Z * [new tag] trunk/639778b3ee3b80e0894367fdc4442b58ae1b3a62 -> trunk/639778b3ee3b80e0894367fdc4442b58ae1b3a62 
2025-08-14T21:24:06.2436120Z * [new tag] trunk/641ee7478150f26969968f49d8b358e199679a8a -> trunk/641ee7478150f26969968f49d8b358e199679a8a 2025-08-14T21:24:06.2436487Z * [new tag] trunk/65053c03a3d209060cb239d20a229dac37cf9dd1 -> trunk/65053c03a3d209060cb239d20a229dac37cf9dd1 2025-08-14T21:24:06.2436845Z * [new tag] trunk/652a6f5954d039d61dc6e6575ccf89d385d74537 -> trunk/652a6f5954d039d61dc6e6575ccf89d385d74537 2025-08-14T21:24:06.2437373Z * [new tag] trunk/685f15dbea66e8ffa8564752f81ad2f6cb447a14 -> trunk/685f15dbea66e8ffa8564752f81ad2f6cb447a14 2025-08-14T21:24:06.2437892Z * [new tag] trunk/68a4b4b2e336cfd4451ce6546d900568e5ddf96c -> trunk/68a4b4b2e336cfd4451ce6546d900568e5ddf96c 2025-08-14T21:24:06.2438609Z * [new tag] trunk/69a0a9aa7f5e320a02e97fa789d2f72baff1554f -> trunk/69a0a9aa7f5e320a02e97fa789d2f72baff1554f 2025-08-14T21:24:06.2439155Z * [new tag] trunk/6be6d06295c870c77a6eb69f96b3170d983520d5 -> trunk/6be6d06295c870c77a6eb69f96b3170d983520d5 2025-08-14T21:24:06.2442009Z * [new tag] trunk/6c05ea6475beaf3acc05e1bda0f3f8fe3bdc1d49 -> trunk/6c05ea6475beaf3acc05e1bda0f3f8fe3bdc1d49 2025-08-14T21:24:06.2442462Z * [new tag] trunk/6da11d9aafc0d84dc7f66030c181608ff2614f66 -> trunk/6da11d9aafc0d84dc7f66030c181608ff2614f66 2025-08-14T21:24:06.2442799Z * [new tag] trunk/6e8865fbc161270e2ffc52817e6c667df417a3f7 -> trunk/6e8865fbc161270e2ffc52817e6c667df417a3f7 2025-08-14T21:24:06.2443136Z * [new tag] trunk/6ea8376f84232048d6be0f7b2edf82aec1b61d58 -> trunk/6ea8376f84232048d6be0f7b2edf82aec1b61d58 2025-08-14T21:24:06.2443770Z * [new tag] trunk/6ee175195ac7853734d64704171993cc6265eb38 -> trunk/6ee175195ac7853734d64704171993cc6265eb38 2025-08-14T21:24:06.2444072Z * [new tag] trunk/6f0f4e0c3eacd479864319127915f869f64e1935 -> trunk/6f0f4e0c3eacd479864319127915f869f64e1935 2025-08-14T21:24:06.2444351Z * [new tag] trunk/70ccdec44b89e355a2cb03ba14a634284f7750f8 -> trunk/70ccdec44b89e355a2cb03ba14a634284f7750f8 2025-08-14T21:24:06.2444665Z * [new tag] trunk/72009ec6bebca7714f99c18449183787f202af4d -> trunk/72009ec6bebca7714f99c18449183787f202af4d 2025-08-14T21:24:06.2445237Z * [new tag] trunk/731ee31f7b6ba19307daab323f6196172b71aaf8 -> trunk/731ee31f7b6ba19307daab323f6196172b71aaf8 2025-08-14T21:24:06.2445905Z * [new tag] trunk/76a0609b6bddb2bc40f1eb4ade12885023653d59 -> trunk/76a0609b6bddb2bc40f1eb4ade12885023653d59 2025-08-14T21:24:06.2446334Z * [new tag] trunk/781e9a7724c47496e3d38a81e6dd6194cf098c41 -> trunk/781e9a7724c47496e3d38a81e6dd6194cf098c41 2025-08-14T21:24:06.2447122Z * [new tag] trunk/78a2fe1d42edeaa2ef7020b0fa0ac82ee4a640e4 -> trunk/78a2fe1d42edeaa2ef7020b0fa0ac82ee4a640e4 2025-08-14T21:24:06.2447587Z * [new tag] trunk/7a974a88f2c529a614baeabe4debd00fc8a3b299 -> trunk/7a974a88f2c529a614baeabe4debd00fc8a3b299 2025-08-14T21:24:06.2448530Z * [new tag] trunk/7ae0629d64b404e0ef5d9c931433ad25e65d6114 -> trunk/7ae0629d64b404e0ef5d9c931433ad25e65d6114 2025-08-14T21:24:06.2448965Z * [new tag] trunk/7d2ec704e47f4b740cdecda5534b305e8e1875ef -> trunk/7d2ec704e47f4b740cdecda5534b305e8e1875ef 2025-08-14T21:24:06.2449805Z * [new tag] trunk/7d87e358ac8440f666fabbfd99058bb5342be6ac -> trunk/7d87e358ac8440f666fabbfd99058bb5342be6ac 2025-08-14T21:24:06.2450225Z * [new tag] trunk/7e27347fd353928c99620495c8c531a5eba7d56b -> trunk/7e27347fd353928c99620495c8c531a5eba7d56b 2025-08-14T21:24:06.2451074Z * [new tag] trunk/7e91394955721c77645fcdb75a5d47a255d65020 -> trunk/7e91394955721c77645fcdb75a5d47a255d65020 2025-08-14T21:24:06.2451627Z * [new tag] trunk/7f4cb4a3e018a621add2a37a3a2f67b982d51001 -> 
trunk/7f4cb4a3e018a621add2a37a3a2f67b982d51001 2025-08-14T21:24:06.2452198Z * [new tag] trunk/7fbc22855c17741ae016992803b2e147a13aa22d -> trunk/7fbc22855c17741ae016992803b2e147a13aa22d 2025-08-14T21:24:06.2453245Z * [new tag] trunk/8047421fbb607d70ede13b9cd5a60b7b8bdfe348 -> trunk/8047421fbb607d70ede13b9cd5a60b7b8bdfe348 2025-08-14T21:24:06.2453478Z * [new tag] trunk/8088cfa592504a2897b4c78f8a46fe658ab5c2c2 -> trunk/8088cfa592504a2897b4c78f8a46fe658ab5c2c2 2025-08-14T21:24:06.2454127Z * [new tag] trunk/80cca8307943ba64168208b54028f55b2c71daff -> trunk/80cca8307943ba64168208b54028f55b2c71daff 2025-08-14T21:24:06.2454780Z * [new tag] trunk/8147370733bbdcd034cad54e9212e51885a11892 -> trunk/8147370733bbdcd034cad54e9212e51885a11892 2025-08-14T21:24:06.2455490Z * [new tag] trunk/83875cdb5594ccb3c9206b8eb5745fe1d011cf26 -> trunk/83875cdb5594ccb3c9206b8eb5745fe1d011cf26 2025-08-14T21:24:06.2455907Z * [new tag] trunk/8399cf88ce8399d2be93355f29d4cb69f51c0654 -> trunk/8399cf88ce8399d2be93355f29d4cb69f51c0654 2025-08-14T21:24:06.2457015Z * [new tag] trunk/842cc77ab9aafd518593c2fce077d6abb42a5b7f -> trunk/842cc77ab9aafd518593c2fce077d6abb42a5b7f 2025-08-14T21:24:06.2457256Z * [new tag] trunk/85db508af533649d0b3447ff3f0d5fe083150c84 -> trunk/85db508af533649d0b3447ff3f0d5fe083150c84 2025-08-14T21:24:06.2457700Z * [new tag] trunk/86eb65f7f06016bcd5d7951dc9d74bc3993a827a -> trunk/86eb65f7f06016bcd5d7951dc9d74bc3993a827a 2025-08-14T21:24:06.2458411Z * [new tag] trunk/87e6c4079d8ec7d04aff00ed82096b39836a8367 -> trunk/87e6c4079d8ec7d04aff00ed82096b39836a8367 2025-08-14T21:24:06.2458892Z * [new tag] trunk/89654db1abccf7e5f261989a150db4d1619ea2aa -> trunk/89654db1abccf7e5f261989a150db4d1619ea2aa 2025-08-14T21:24:06.2459419Z * [new tag] trunk/8a37f0c90392a2c38b7c5955471fa49edcaf5cb1 -> trunk/8a37f0c90392a2c38b7c5955471fa49edcaf5cb1 2025-08-14T21:24:06.2460249Z * [new tag] trunk/8ab5868a2199fe485c2d66533b9244ccb97e487d -> trunk/8ab5868a2199fe485c2d66533b9244ccb97e487d 2025-08-14T21:24:06.2460751Z * [new tag] trunk/8ae4d2652f64b8444b3d5314b9232bd2119bcde6 -> trunk/8ae4d2652f64b8444b3d5314b9232bd2119bcde6 2025-08-14T21:24:06.2461412Z * [new tag] trunk/8c41cb800ae0411f02ea5da34bd5ccc3790633b0 -> trunk/8c41cb800ae0411f02ea5da34bd5ccc3790633b0 2025-08-14T21:24:06.2462061Z * [new tag] trunk/8cb91e20bc205b1416648d0ffd98d1ba1f3a6fc4 -> trunk/8cb91e20bc205b1416648d0ffd98d1ba1f3a6fc4 2025-08-14T21:24:06.2462618Z * [new tag] trunk/8cfaf51d4e29c9bd9f49ecc98d955ed53df1a13d -> trunk/8cfaf51d4e29c9bd9f49ecc98d955ed53df1a13d 2025-08-14T21:24:06.2463116Z * [new tag] trunk/8d1cf529229dce7cd5ea04abb0faac83b87ca6d1 -> trunk/8d1cf529229dce7cd5ea04abb0faac83b87ca6d1 2025-08-14T21:24:06.2463850Z * [new tag] trunk/8d3d1c844303cb1d46123a1caa76d4cf83973347 -> trunk/8d3d1c844303cb1d46123a1caa76d4cf83973347 2025-08-14T21:24:06.2464345Z * [new tag] trunk/8d6d3246316e1767a57d5e855acd6208da753b75 -> trunk/8d6d3246316e1767a57d5e855acd6208da753b75 2025-08-14T21:24:06.2464931Z * [new tag] trunk/8e6a3138581152ab827a0997f34c470271399f5e -> trunk/8e6a3138581152ab827a0997f34c470271399f5e 2025-08-14T21:24:06.2465505Z * [new tag] trunk/8eee08d2279b98af2522debb6512d37e837e89e3 -> trunk/8eee08d2279b98af2522debb6512d37e837e89e3 2025-08-14T21:24:06.2466162Z * [new tag] trunk/90b78ee50f73b5c963996076a3d54b74b1b965be -> trunk/90b78ee50f73b5c963996076a3d54b74b1b965be 2025-08-14T21:24:06.2466550Z * [new tag] trunk/94b91a876327820a4bb6f5d39d156f13f2553ab6 -> trunk/94b91a876327820a4bb6f5d39d156f13f2553ab6 2025-08-14T21:24:06.2467788Z * [new tag] 
trunk/95210cc409dd578988c7116b47725c304dea54c7 -> trunk/95210cc409dd578988c7116b47725c304dea54c7 2025-08-14T21:24:06.2468025Z * [new tag] trunk/96bd33b2de79598566df395f32e27c4d33673f05 -> trunk/96bd33b2de79598566df395f32e27c4d33673f05 2025-08-14T21:24:06.2468848Z * [new tag] trunk/9708fcf92db88b80b9010c68662d634434da3106 -> trunk/9708fcf92db88b80b9010c68662d634434da3106 2025-08-14T21:24:06.2469293Z * [new tag] trunk/97c8c98f8dcb9c5c188b691d156e0043dba6c7f8 -> trunk/97c8c98f8dcb9c5c188b691d156e0043dba6c7f8 2025-08-14T21:24:06.2469938Z * [new tag] trunk/9903ca4f70bdc1653016256f5b4fd74fdfc609f8 -> trunk/9903ca4f70bdc1653016256f5b4fd74fdfc609f8 2025-08-14T21:24:06.2470601Z * [new tag] trunk/99bc2f94c1955657e950ebdad5f77e518785ccbd -> trunk/99bc2f94c1955657e950ebdad5f77e518785ccbd 2025-08-14T21:24:06.2471126Z * [new tag] trunk/9a06e6d0310da9d8a59ae05e8ec9c0201b55cacd -> trunk/9a06e6d0310da9d8a59ae05e8ec9c0201b55cacd 2025-08-14T21:24:06.2471666Z * [new tag] trunk/9a0f7a3bb01b235ea04581ee540970a098071b72 -> trunk/9a0f7a3bb01b235ea04581ee540970a098071b72 2025-08-14T21:24:06.2472328Z * [new tag] trunk/9b803cdbe298009f08340c1aaccb25aafbca95d8 -> trunk/9b803cdbe298009f08340c1aaccb25aafbca95d8 2025-08-14T21:24:06.2473061Z * [new tag] trunk/9ccd0f5e31ea54fcf42101dfbaacc103494e34df -> trunk/9ccd0f5e31ea54fcf42101dfbaacc103494e34df 2025-08-14T21:24:06.2473553Z * [new tag] trunk/9d37c960a4fc44d5ac334ca8bf775f85b95d76fc -> trunk/9d37c960a4fc44d5ac334ca8bf775f85b95d76fc 2025-08-14T21:24:06.2474235Z * [new tag] trunk/9e07673deb212c87b1c6fea23799a97474c476ed -> trunk/9e07673deb212c87b1c6fea23799a97474c476ed 2025-08-14T21:24:06.2474664Z * [new tag] trunk/9eedd2a20b64302d0d116ea2802b50948d2ebb09 -> trunk/9eedd2a20b64302d0d116ea2802b50948d2ebb09 2025-08-14T21:24:06.2475992Z * [new tag] trunk/9fa8ce26cf638504469852cbc3e7d04579fc8674 -> trunk/9fa8ce26cf638504469852cbc3e7d04579fc8674 2025-08-14T21:24:06.2476244Z * [new tag] trunk/a06ec54d40013c97fbffc174ea8f524ea5a95715 -> trunk/a06ec54d40013c97fbffc174ea8f524ea5a95715 2025-08-14T21:24:06.2477069Z * [new tag] trunk/a288b15ea9f87ddd665f249d492e0fb0861f5a69 -> trunk/a288b15ea9f87ddd665f249d492e0fb0861f5a69 2025-08-14T21:24:06.2477560Z * [new tag] trunk/a2fd106d670bb4990cebfd00f25ecbae4145e76c -> trunk/a2fd106d670bb4990cebfd00f25ecbae4145e76c 2025-08-14T21:24:06.2478556Z * [new tag] trunk/a354fa91e26b376d96385a2206c5ff5b42aa4600 -> trunk/a354fa91e26b376d96385a2206c5ff5b42aa4600 2025-08-14T21:24:06.2478819Z * [new tag] trunk/a4f69a5da08eace1c1e6469dec6a18aa842da73b -> trunk/a4f69a5da08eace1c1e6469dec6a18aa842da73b 2025-08-14T21:24:06.2479564Z * [new tag] trunk/a53d14d5f846ac44f6c205abb1c5bc4d2f3126ae -> trunk/a53d14d5f846ac44f6c205abb1c5bc4d2f3126ae 2025-08-14T21:24:06.2480017Z * [new tag] trunk/a5652407e4f3d772fc44486ac2abf756decf0861 -> trunk/a5652407e4f3d772fc44486ac2abf756decf0861 2025-08-14T21:24:06.2481002Z * [new tag] trunk/a7abf57aabec0ce686092e2d66e53ba185dbc56b -> trunk/a7abf57aabec0ce686092e2d66e53ba185dbc56b 2025-08-14T21:24:06.2481269Z * [new tag] trunk/a84b60c0c4016785fd93b7b8a0c04f2d0770d332 -> trunk/a84b60c0c4016785fd93b7b8a0c04f2d0770d332 2025-08-14T21:24:06.2482288Z * [new tag] trunk/aa75e917bdb0f95bb6dee81853c2d3c4ab3e1883 -> trunk/aa75e917bdb0f95bb6dee81853c2d3c4ab3e1883 2025-08-14T21:24:06.2482652Z * [new tag] trunk/adcca7d9a1c053495e99012de801b2ea237faad0 -> trunk/adcca7d9a1c053495e99012de801b2ea237faad0 2025-08-14T21:24:06.2483308Z * [new tag] trunk/af10f1f86cc4effc93142a447693d8be55966615 -> trunk/af10f1f86cc4effc93142a447693d8be55966615 
2025-08-14T21:24:06.2483890Z * [new tag] trunk/af3cabc55d5699f4da528e1ca39d83338f84ae8c -> trunk/af3cabc55d5699f4da528e1ca39d83338f84ae8c 2025-08-14T21:24:06.2484573Z * [new tag] trunk/b0df7715e8c590c0001d1f9cdb97057be80c9107 -> trunk/b0df7715e8c590c0001d1f9cdb97057be80c9107 2025-08-14T21:24:06.2485097Z * [new tag] trunk/b149c7204c218e7c4d6594a89dd74f72bd480ec5 -> trunk/b149c7204c218e7c4d6594a89dd74f72bd480ec5 2025-08-14T21:24:06.2485718Z * [new tag] trunk/b1a602762e6a6674b406a3137e7e7a678885a97b -> trunk/b1a602762e6a6674b406a3137e7e7a678885a97b 2025-08-14T21:24:06.2486267Z * [new tag] trunk/b1f43548cad8fc0e30bda250f6e196310fa7a4bc -> trunk/b1f43548cad8fc0e30bda250f6e196310fa7a4bc 2025-08-14T21:24:06.2486914Z * [new tag] trunk/b219ca2a00a305753c4f1ea4c9c5d23243d54753 -> trunk/b219ca2a00a305753c4f1ea4c9c5d23243d54753 2025-08-14T21:24:06.2487470Z * [new tag] trunk/b4596895b9d85a686c2cb978938b0a7797b3690a -> trunk/b4596895b9d85a686c2cb978938b0a7797b3690a 2025-08-14T21:24:06.2488130Z * [new tag] trunk/b5fd7223b1bf44720dc9183bda7dfcf7aeccff02 -> trunk/b5fd7223b1bf44720dc9183bda7dfcf7aeccff02 2025-08-14T21:24:06.2488592Z * [new tag] trunk/b602ea9cab7d43a7ee7b4051227090f23fbd3dbf -> trunk/b602ea9cab7d43a7ee7b4051227090f23fbd3dbf 2025-08-14T21:24:06.2489628Z * [new tag] trunk/b6b74aed604bd2e96389ff99aaaf39abc64fdc64 -> trunk/b6b74aed604bd2e96389ff99aaaf39abc64fdc64 2025-08-14T21:24:06.2489854Z * [new tag] trunk/b7db86600a2614adc71c92ca42d359a7ac534d78 -> trunk/b7db86600a2614adc71c92ca42d359a7ac534d78 2025-08-14T21:24:06.2490496Z * [new tag] trunk/b9003ed3d87699e81e436719625a21996a6654e5 -> trunk/b9003ed3d87699e81e436719625a21996a6654e5 2025-08-14T21:24:06.2491078Z * [new tag] trunk/b90feeac86bda00afc2789321bcd706015ff44e3 -> trunk/b90feeac86bda00afc2789321bcd706015ff44e3 2025-08-14T21:24:06.2492430Z * [new tag] trunk/b9d7de3a094598c3dc0dd52e57bce30eb684c9d8 -> trunk/b9d7de3a094598c3dc0dd52e57bce30eb684c9d8 2025-08-14T21:24:06.2492672Z * [new tag] trunk/ba47821f524eee50a214ed39fa2e7765d54aabf4 -> trunk/ba47821f524eee50a214ed39fa2e7765d54aabf4 2025-08-14T21:24:06.2493417Z * [new tag] trunk/ba4ccf5d67e3d237f435eacc2bce3c6025f08491 -> trunk/ba4ccf5d67e3d237f435eacc2bce3c6025f08491 2025-08-14T21:24:06.2494003Z * [new tag] trunk/bcf23ecc476df2bd7479f142567213e2623308ee -> trunk/bcf23ecc476df2bd7479f142567213e2623308ee 2025-08-14T21:24:06.2494515Z * [new tag] trunk/be53f609aaf6f01e2863f490975ea9eaac3ee9ff -> trunk/be53f609aaf6f01e2863f490975ea9eaac3ee9ff 2025-08-14T21:24:06.2495098Z * [new tag] trunk/beb4d7816dedc67a5de1f82e5a45b5910f407941 -> trunk/beb4d7816dedc67a5de1f82e5a45b5910f407941 2025-08-14T21:24:06.2496206Z * [new tag] trunk/bfc873d02ec413344717493e4175a902921359fd -> trunk/bfc873d02ec413344717493e4175a902921359fd 2025-08-14T21:24:06.2496523Z * [new tag] trunk/c184cb3852f0ff2d16a489d61abc3739c309e6ca -> trunk/c184cb3852f0ff2d16a489d61abc3739c309e6ca 2025-08-14T21:24:06.2497048Z * [new tag] trunk/c24ca7f4bf79f62fd623d76346ca27e53f731431 -> trunk/c24ca7f4bf79f62fd623d76346ca27e53f731431 2025-08-14T21:24:06.2497589Z * [new tag] trunk/c3dc8dc4122977893004c49d10e4676cd0a97da4 -> trunk/c3dc8dc4122977893004c49d10e4676cd0a97da4 2025-08-14T21:24:06.2498273Z * [new tag] trunk/c5ec5458a547f7a774468ea0eb2258d3de596492 -> trunk/c5ec5458a547f7a774468ea0eb2258d3de596492 2025-08-14T21:24:06.2498895Z * [new tag] trunk/c5efc5c8a66eca84865015058b3221013ebfe685 -> trunk/c5efc5c8a66eca84865015058b3221013ebfe685 2025-08-14T21:24:06.2499616Z * [new tag] trunk/c6563341208003f64c131854a9cf029555f786d2 -> 
trunk/c6563341208003f64c131854a9cf029555f786d2 2025-08-14T21:24:06.2500261Z * [new tag] trunk/c6d78d4dbda53837d298d23a5fbc09af90a42d9e -> trunk/c6d78d4dbda53837d298d23a5fbc09af90a42d9e 2025-08-14T21:24:06.2500779Z * [new tag] trunk/c8205cb35435f39d2c26f6c94b45e4adeb6dcb23 -> trunk/c8205cb35435f39d2c26f6c94b45e4adeb6dcb23 2025-08-14T21:24:06.2501819Z * [new tag] trunk/c859ba7114b1fcb49527e090745fa17091d1f8d5 -> trunk/c859ba7114b1fcb49527e090745fa17091d1f8d5 2025-08-14T21:24:06.2502170Z * [new tag] trunk/c86040a8e68f754b90a84099187d3624954c7f36 -> trunk/c86040a8e68f754b90a84099187d3624954c7f36 2025-08-14T21:24:06.2502995Z * [new tag] trunk/c9671dc865aa0fc1cb86df754e355b44d8e02bb4 -> trunk/c9671dc865aa0fc1cb86df754e355b44d8e02bb4 2025-08-14T21:24:06.2503610Z * [new tag] trunk/ca7315c17162ea21b1ca5ba23f4bf6168766c7b9 -> trunk/ca7315c17162ea21b1ca5ba23f4bf6168766c7b9 2025-08-14T21:24:06.2504252Z * [new tag] trunk/cae2b5e3d223829bdc553fc8601df4b1c1554cff -> trunk/cae2b5e3d223829bdc553fc8601df4b1c1554cff 2025-08-14T21:24:06.2504913Z * [new tag] trunk/cbffde774557752cf20447d42d99ec6102673c31 -> trunk/cbffde774557752cf20447d42d99ec6102673c31 2025-08-14T21:24:06.2505472Z * [new tag] trunk/cd8d8c18f5bafdc1c73d5ac0129e7b4d76ab45bc -> trunk/cd8d8c18f5bafdc1c73d5ac0129e7b4d76ab45bc 2025-08-14T21:24:06.2506195Z * [new tag] trunk/cf0a0dcb0afa5e84b95461cc542f862b51ca96bf -> trunk/cf0a0dcb0afa5e84b95461cc542f862b51ca96bf 2025-08-14T21:24:06.2506704Z * [new tag] trunk/cf4964be68fa9f4ffc334f01cce42d7424b1cc81 -> trunk/cf4964be68fa9f4ffc334f01cce42d7424b1cc81 2025-08-14T21:24:06.2507868Z * [new tag] trunk/d0e2240f680ea2a553f7ee8188f52482e130bfd0 -> trunk/d0e2240f680ea2a553f7ee8188f52482e130bfd0 2025-08-14T21:24:06.2508108Z * [new tag] trunk/d1950d4bb5cba8fb6b23e4d283eea5b9801737e2 -> trunk/d1950d4bb5cba8fb6b23e4d283eea5b9801737e2 2025-08-14T21:24:06.2511097Z * [new tag] trunk/d20c4c20e61adecf00335c4d8c22eb1ace472cd3 -> trunk/d20c4c20e61adecf00335c4d8c22eb1ace472cd3 2025-08-14T21:24:06.2511429Z * [new tag] trunk/d25c4f954d599ea512e2f70cd6df101c21479d4c -> trunk/d25c4f954d599ea512e2f70cd6df101c21479d4c 2025-08-14T21:24:06.2511683Z * [new tag] trunk/d3d359dbafa89173a371e2637f22b47398e94a24 -> trunk/d3d359dbafa89173a371e2637f22b47398e94a24 2025-08-14T21:24:06.2511932Z * [new tag] trunk/d46768db04499d07a5b0db984112a6d1b7d3b0c1 -> trunk/d46768db04499d07a5b0db984112a6d1b7d3b0c1 2025-08-14T21:24:06.2512174Z * [new tag] trunk/d4c1a08c89f37d249a0146ff511c82ecc5c53b8f -> trunk/d4c1a08c89f37d249a0146ff511c82ecc5c53b8f 2025-08-14T21:24:06.2512598Z * [new tag] trunk/d556586448f3caab85673c7da0978fe31c7748f7 -> trunk/d556586448f3caab85673c7da0978fe31c7748f7 2025-08-14T21:24:06.2512966Z * [new tag] trunk/d670304001429a1a833255a918ed788d7ec4989a -> trunk/d670304001429a1a833255a918ed788d7ec4989a 2025-08-14T21:24:06.2513638Z * [new tag] trunk/d6786741a77aba200c78002646cc069b7a1799b0 -> trunk/d6786741a77aba200c78002646cc069b7a1799b0 2025-08-14T21:24:06.2514652Z * [new tag] trunk/d68c323692dedcbb74e670801e3502944fd790ff -> trunk/d68c323692dedcbb74e670801e3502944fd790ff 2025-08-14T21:24:06.2514950Z * [new tag] trunk/d8cb3db5339b45e4b745b2b883ef3ecde9843e2c -> trunk/d8cb3db5339b45e4b745b2b883ef3ecde9843e2c 2025-08-14T21:24:06.2515240Z * [new tag] trunk/da1f608ca33f3062535d0a4866d95db19e72fcbd -> trunk/da1f608ca33f3062535d0a4866d95db19e72fcbd 2025-08-14T21:24:06.2515767Z * [new tag] trunk/db0b7f1cc9bb3fe71aaf8b964a644147ae8e1c35 -> trunk/db0b7f1cc9bb3fe71aaf8b964a644147ae8e1c35 2025-08-14T21:24:06.2516914Z * [new tag] 
trunk/db32b60662b2f2bdcad980127d5dc4b66b02a7e4 -> trunk/db32b60662b2f2bdcad980127d5dc4b66b02a7e4 2025-08-14T21:24:06.2517206Z * [new tag] trunk/db763b17175553ba09637362eb9773a91997a7ad -> trunk/db763b17175553ba09637362eb9773a91997a7ad 2025-08-14T21:24:06.2519759Z * [new tag] trunk/db78943a1ca13a32a3d6045eb15e2b719ee13a2f -> trunk/db78943a1ca13a32a3d6045eb15e2b719ee13a2f 2025-08-14T21:24:06.2520262Z * [new tag] trunk/dc0d18e023d9b7e314ebba0f234b6cb1579dbcfd -> trunk/dc0d18e023d9b7e314ebba0f234b6cb1579dbcfd 2025-08-14T21:24:06.2520503Z * [new tag] trunk/dd21c8a578038ab2841a7ba809a06921093ac9d8 -> trunk/dd21c8a578038ab2841a7ba809a06921093ac9d8 2025-08-14T21:24:06.2520746Z * [new tag] trunk/deea71a90e05eb320c04bebfead5317746637f0d -> trunk/deea71a90e05eb320c04bebfead5317746637f0d 2025-08-14T21:24:06.2521005Z * [new tag] trunk/df55ec7d4b35f6d21691e9dd41c82f27de762948 -> trunk/df55ec7d4b35f6d21691e9dd41c82f27de762948 2025-08-14T21:24:06.2521261Z * [new tag] trunk/e1cf0d496ea85d1807c8c740f296e77bf7bdc1df -> trunk/e1cf0d496ea85d1807c8c740f296e77bf7bdc1df 2025-08-14T21:24:06.2521544Z * [new tag] trunk/e248719ac03c103767ab72034f6b9fd56855bf98 -> trunk/e248719ac03c103767ab72034f6b9fd56855bf98 2025-08-14T21:24:06.2522353Z * [new tag] trunk/e49762026070f66be41bfa6537fbcf9bfc24e558 -> trunk/e49762026070f66be41bfa6537fbcf9bfc24e558 2025-08-14T21:24:06.2522822Z * [new tag] trunk/e4de93f6a3e342bab34d3757cf90ec0ccc87e168 -> trunk/e4de93f6a3e342bab34d3757cf90ec0ccc87e168 2025-08-14T21:24:06.2523393Z * [new tag] trunk/e619c6bb90b9dedaccd3cbeed86a288993a4e33f -> trunk/e619c6bb90b9dedaccd3cbeed86a288993a4e33f 2025-08-14T21:24:06.2524053Z * [new tag] trunk/e63c2b21c186a7d2ab8a8953b8aa1535f2e96e58 -> trunk/e63c2b21c186a7d2ab8a8953b8aa1535f2e96e58 2025-08-14T21:24:06.2524654Z * [new tag] trunk/e7152ff8a6a929a0db7f3f4a72a5b6d471769cd3 -> trunk/e7152ff8a6a929a0db7f3f4a72a5b6d471769cd3 2025-08-14T21:24:06.2525266Z * [new tag] trunk/e96c7c4bb0f6aeae2ab3b6f040f7d67edbec199a -> trunk/e96c7c4bb0f6aeae2ab3b6f040f7d67edbec199a 2025-08-14T21:24:06.2525815Z * [new tag] trunk/e9eb2096a59a79e7a94c3e28a0715e040369f34c -> trunk/e9eb2096a59a79e7a94c3e28a0715e040369f34c 2025-08-14T21:24:06.2526407Z * [new tag] trunk/eac2d9d695a32dd456050f45cac35134ec3809f4 -> trunk/eac2d9d695a32dd456050f45cac35134ec3809f4 2025-08-14T21:24:06.2527011Z * [new tag] trunk/ecde76c764752540edf9ef62a97936c86d984b17 -> trunk/ecde76c764752540edf9ef62a97936c86d984b17 2025-08-14T21:24:06.2527436Z * [new tag] trunk/ecea81117b2fdc52907c97b3c32d779e07b5d55b -> trunk/ecea81117b2fdc52907c97b3c32d779e07b5d55b 2025-08-14T21:24:06.2528069Z * [new tag] trunk/edaa151d0d5a4e75fbec9843f49cc78770eb61fb -> trunk/edaa151d0d5a4e75fbec9843f49cc78770eb61fb 2025-08-14T21:24:06.2528601Z * [new tag] trunk/ee1b0412b919dfb358d5a697b3be49621497fbc2 -> trunk/ee1b0412b919dfb358d5a697b3be49621497fbc2 2025-08-14T21:24:06.2531069Z * [new tag] trunk/ee1fb43450c2e985657f95a91b68328d6f20f24e -> trunk/ee1fb43450c2e985657f95a91b68328d6f20f24e 2025-08-14T21:24:06.2531501Z * [new tag] trunk/ee89cc7a0acd69de25f98fe4ef828546db7b444c -> trunk/ee89cc7a0acd69de25f98fe4ef828546db7b444c 2025-08-14T21:24:06.2531751Z * [new tag] trunk/ee9f8ba11d664b871a9e0c7933fdc8571635b78c -> trunk/ee9f8ba11d664b871a9e0c7933fdc8571635b78c 2025-08-14T21:24:06.2531995Z * [new tag] trunk/eed9dbf70f43ee529fec78ac00ed9a4fd74c6e76 -> trunk/eed9dbf70f43ee529fec78ac00ed9a4fd74c6e76 2025-08-14T21:24:06.2532239Z * [new tag] trunk/f077c2402e4eb5b0ed562b4ee5b7a0503f26ef94 -> trunk/f077c2402e4eb5b0ed562b4ee5b7a0503f26ef94 
2025-08-14T21:24:06.2532465Z * [new tag] trunk/f0980fc0bbd656d6c02d23ad97e945353b314f35 -> trunk/f0980fc0bbd656d6c02d23ad97e945353b314f35 2025-08-14T21:24:06.2533021Z * [new tag] trunk/f15ada5c6fad97a7dcbfa4673f067b6942dda640 -> trunk/f15ada5c6fad97a7dcbfa4673f067b6942dda640 2025-08-14T21:24:06.2533575Z * [new tag] trunk/f27232a2134150cb5e55d26a74d8c36c6a961ca5 -> trunk/f27232a2134150cb5e55d26a74d8c36c6a961ca5 2025-08-14T21:24:06.2534155Z * [new tag] trunk/f33ce40bc062a281e1a1f57e8c1926d0a7d155cc -> trunk/f33ce40bc062a281e1a1f57e8c1926d0a7d155cc 2025-08-14T21:24:06.2534671Z * [new tag] trunk/f341077ce4710172da20cfad916ee37159bfe9fe -> trunk/f341077ce4710172da20cfad916ee37159bfe9fe 2025-08-14T21:24:06.2535323Z * [new tag] trunk/f3a4d742ece08de4cb0e59dcc62e0093a7d0b0c7 -> trunk/f3a4d742ece08de4cb0e59dcc62e0093a7d0b0c7 2025-08-14T21:24:06.2535885Z * [new tag] trunk/f3f159ff8c4bad2edec99c68a941c628e983d04c -> trunk/f3f159ff8c4bad2edec99c68a941c628e983d04c 2025-08-14T21:24:06.2536536Z * [new tag] trunk/f60454cce8b93e5bbf67f2f3c88c8ac01ed65457 -> trunk/f60454cce8b93e5bbf67f2f3c88c8ac01ed65457 2025-08-14T21:24:06.2537112Z * [new tag] trunk/f7b2f3314cf7aede67d5fa5c75e4243208484344 -> trunk/f7b2f3314cf7aede67d5fa5c75e4243208484344 2025-08-14T21:24:06.2537780Z * [new tag] trunk/f8f0414a5983ff481a2188e0c18594150430c8c5 -> trunk/f8f0414a5983ff481a2188e0c18594150430c8c5 2025-08-14T21:24:06.2538445Z * [new tag] trunk/f95b58c2844b3444cd8446fed8570729dc4216eb -> trunk/f95b58c2844b3444cd8446fed8570729dc4216eb 2025-08-14T21:24:06.2543980Z * [new tag] trunk/f990490a23815ea6ee27e487c70ba2cf513ba43d -> trunk/f990490a23815ea6ee27e487c70ba2cf513ba43d 2025-08-14T21:24:06.2544260Z * [new tag] trunk/fb887c3bb588cfe782615e67f6c26db636b8539b -> trunk/fb887c3bb588cfe782615e67f6c26db636b8539b 2025-08-14T21:24:06.2544506Z * [new tag] trunk/fc25c68f20f772290927a7031b998b92615259cf -> trunk/fc25c68f20f772290927a7031b998b92615259cf 2025-08-14T21:24:06.2544747Z * [new tag] trunk/fc80f6859e0ccf66513a40f04b9e735e759d4ddb -> trunk/fc80f6859e0ccf66513a40f04b9e735e759d4ddb 2025-08-14T21:24:06.2545000Z * [new tag] trunk/fdfd69bb05488d76123db9cc1cdd90ac4137bbfb -> trunk/fdfd69bb05488d76123db9cc1cdd90ac4137bbfb 2025-08-14T21:24:06.2545264Z * [new tag] trunk/fe3f5fe4ea2ff6f56406dc5d954636ebb08d0a08 -> trunk/fe3f5fe4ea2ff6f56406dc5d954636ebb08d0a08 2025-08-14T21:24:06.2545501Z * [new tag] trunk/fea7e9dd37c02c334b130f6624af6163fde6b2ab -> trunk/fea7e9dd37c02c334b130f6624af6163fde6b2ab 2025-08-14T21:24:06.2545744Z * [new tag] trunk/ff0d56d03592aa03f3ced8359241d21df1783393 -> trunk/ff0d56d03592aa03f3ced8359241d21df1783393 2025-08-14T21:24:06.2549934Z * [new tag] v0.1.1 -> v0.1.1 2025-08-14T21:24:06.2551890Z * [new tag] v0.1.10 -> v0.1.10 2025-08-14T21:24:06.2552125Z * [new tag] v0.1.11 -> v0.1.11 2025-08-14T21:24:06.2557050Z * [new tag] v0.1.12 -> v0.1.12 2025-08-14T21:24:06.2561715Z * [new tag] v0.1.2 -> v0.1.2 2025-08-14T21:24:06.2563762Z * [new tag] v0.1.3 -> v0.1.3 2025-08-14T21:24:06.2564303Z * [new tag] v0.1.4 -> v0.1.4 2025-08-14T21:24:06.2564508Z * [new tag] v0.1.5 -> v0.1.5 2025-08-14T21:24:06.2564759Z * [new tag] v0.1.6 -> v0.1.6 2025-08-14T21:24:06.2564860Z * [new tag] v0.1.7 -> v0.1.7 2025-08-14T21:24:06.2565063Z * [new tag] v0.1.8 -> v0.1.8 2025-08-14T21:24:06.2565174Z * [new tag] v0.1.9 -> v0.1.9 2025-08-14T21:24:06.2565271Z * [new tag] v0.2.0 -> v0.2.0 2025-08-14T21:24:06.2565433Z * [new tag] v0.3.0 -> v0.3.0 2025-08-14T21:24:06.2570340Z * [new tag] v0.3.1 -> v0.3.1 2025-08-14T21:24:06.2572522Z * [new tag] v0.4.0 -> 
v0.4.0 2025-08-14T21:24:06.2573014Z * [new tag] v0.4.1 -> v0.4.1 2025-08-14T21:24:06.2573135Z * [new tag] v1.0.0 -> v1.0.0 2025-08-14T21:24:06.2573324Z * [new tag] v1.0.0a0 -> v1.0.0a0 2025-08-14T21:24:06.2573443Z * [new tag] v1.0.1 -> v1.0.1 2025-08-14T21:24:06.2573551Z * [new tag] v1.0rc0 -> v1.0rc0 2025-08-14T21:24:06.2573644Z * [new tag] v1.0rc1 -> v1.0rc1 2025-08-14T21:24:06.2573819Z * [new tag] v1.1.0 -> v1.1.0 2025-08-14T21:24:06.2574247Z * [new tag] v1.1.0a0 -> v1.1.0a0 2025-08-14T21:24:06.2574376Z * [new tag] v1.10.0 -> v1.10.0 2025-08-14T21:24:06.2574495Z * [new tag] v1.10.0-rc1 -> v1.10.0-rc1 2025-08-14T21:24:06.2574594Z * [new tag] v1.10.0-rc2 -> v1.10.0-rc2 2025-08-14T21:24:06.2574721Z * [new tag] v1.10.0-rc3 -> v1.10.0-rc3 2025-08-14T21:24:06.2574831Z * [new tag] v1.10.1 -> v1.10.1 2025-08-14T21:24:06.2574929Z * [new tag] v1.10.1-rc1 -> v1.10.1-rc1 2025-08-14T21:24:06.2575029Z * [new tag] v1.10.2 -> v1.10.2 2025-08-14T21:24:06.2575124Z * [new tag] v1.10.2-rc1 -> v1.10.2-rc1 2025-08-14T21:24:06.2575215Z * [new tag] v1.11.0 -> v1.11.0 2025-08-14T21:24:06.2575335Z * [new tag] v1.11.0-rc1 -> v1.11.0-rc1 2025-08-14T21:24:06.2575430Z * [new tag] v1.11.0-rc2 -> v1.11.0-rc2 2025-08-14T21:24:06.2575531Z * [new tag] v1.11.0-rc3 -> v1.11.0-rc3 2025-08-14T21:24:06.2575622Z * [new tag] v1.11.0-rc4 -> v1.11.0-rc4 2025-08-14T21:24:06.2575718Z * [new tag] v1.11.0-rc5 -> v1.11.0-rc5 2025-08-14T21:24:06.2575821Z * [new tag] v1.11.0-rc6 -> v1.11.0-rc6 2025-08-14T21:24:06.2575913Z * [new tag] v1.11.0-rc7 -> v1.11.0-rc7 2025-08-14T21:24:06.2576001Z * [new tag] v1.12.0 -> v1.12.0 2025-08-14T21:24:06.2576101Z * [new tag] v1.12.0-rc1 -> v1.12.0-rc1 2025-08-14T21:24:06.2576193Z * [new tag] v1.12.0-rc2 -> v1.12.0-rc2 2025-08-14T21:24:06.2576292Z * [new tag] v1.12.0-rc3 -> v1.12.0-rc3 2025-08-14T21:24:06.2576383Z * [new tag] v1.12.0-rc4 -> v1.12.0-rc4 2025-08-14T21:24:06.2576473Z * [new tag] v1.12.0-rc5 -> v1.12.0-rc5 2025-08-14T21:24:06.2576571Z * [new tag] v1.12.0-rc6 -> v1.12.0-rc6 2025-08-14T21:24:06.2576667Z * [new tag] v1.12.0-rc7 -> v1.12.0-rc7 2025-08-14T21:24:06.2576897Z * [new tag] v1.12.0-rc8 -> v1.12.0-rc8 2025-08-14T21:24:06.2577007Z * [new tag] v1.12.1 -> v1.12.1 2025-08-14T21:24:06.2577099Z * [new tag] v1.12.1-rc1 -> v1.12.1-rc1 2025-08-14T21:24:06.2577201Z * [new tag] v1.12.1-rc2 -> v1.12.1-rc2 2025-08-14T21:24:06.2577299Z * [new tag] v1.12.1-rc3 -> v1.12.1-rc3 2025-08-14T21:24:06.2577396Z * [new tag] v1.12.1-rc4 -> v1.12.1-rc4 2025-08-14T21:24:06.2577504Z * [new tag] v1.12.1-rc5 -> v1.12.1-rc5 2025-08-14T21:24:06.2577599Z * [new tag] v1.13.0 -> v1.13.0 2025-08-14T21:24:06.2577697Z * [new tag] v1.13.0-rc1 -> v1.13.0-rc1 2025-08-14T21:24:06.2577804Z * [new tag] v1.13.0-rc2 -> v1.13.0-rc2 2025-08-14T21:24:06.2577962Z * [new tag] v1.13.0-rc3 -> v1.13.0-rc3 2025-08-14T21:24:06.2578078Z * [new tag] v1.13.0-rc4 -> v1.13.0-rc4 2025-08-14T21:24:06.2578179Z * [new tag] v1.13.0-rc5 -> v1.13.0-rc5 2025-08-14T21:24:06.2578278Z * [new tag] v1.13.0-rc6 -> v1.13.0-rc6 2025-08-14T21:24:06.2578381Z * [new tag] v1.13.1 -> v1.13.1 2025-08-14T21:24:06.2578478Z * [new tag] v1.13.1-rc1 -> v1.13.1-rc1 2025-08-14T21:24:06.2578601Z * [new tag] v1.2.0 -> v1.2.0 2025-08-14T21:24:06.2579015Z * [new tag] v1.2.0a0 -> v1.2.0a0 2025-08-14T21:24:06.2579583Z * [new tag] v1.3.0 -> v1.3.0 2025-08-14T21:24:06.2580068Z * [new tag] v1.3.0a0 -> v1.3.0a0 2025-08-14T21:24:06.2580567Z * [new tag] v1.3.1 -> v1.3.1 2025-08-14T21:24:06.2581053Z * [new tag] v1.4.0 -> v1.4.0 2025-08-14T21:24:06.2581973Z * [new tag] v1.4.0a0 -> 
v1.4.0a0 2025-08-14T21:24:06.2582153Z * [new tag] v1.4.1 -> v1.4.1 2025-08-14T21:24:06.2583074Z * [new tag] v1.5.0 -> v1.5.0 2025-08-14T21:24:06.2583303Z * [new tag] v1.5.0-rc1 -> v1.5.0-rc1 2025-08-14T21:24:06.2584316Z * [new tag] v1.5.0-rc2 -> v1.5.0-rc2 2025-08-14T21:24:06.2584712Z * [new tag] v1.5.0-rc3 -> v1.5.0-rc3 2025-08-14T21:24:06.2585345Z * [new tag] v1.5.0-rc4 -> v1.5.0-rc4 2025-08-14T21:24:06.2585592Z * [new tag] v1.5.0-rc5 -> v1.5.0-rc5 2025-08-14T21:24:06.2586593Z * [new tag] v1.5.1 -> v1.5.1 2025-08-14T21:24:06.2586801Z * [new tag] v1.5.1-rc1 -> v1.5.1-rc1 2025-08-14T21:24:06.2588874Z * [new tag] v1.6.0 -> v1.6.0 2025-08-14T21:24:06.2589031Z * [new tag] v1.6.0-rc1 -> v1.6.0-rc1 2025-08-14T21:24:06.2589157Z * [new tag] v1.6.0-rc2 -> v1.6.0-rc2 2025-08-14T21:24:06.2589262Z * [new tag] v1.6.0-rc3 -> v1.6.0-rc3 2025-08-14T21:24:06.2589410Z * [new tag] v1.6.0-rc4 -> v1.6.0-rc4 2025-08-14T21:24:06.2594866Z * [new tag] v1.6.0-rc5 -> v1.6.0-rc5 2025-08-14T21:24:06.2595160Z * [new tag] v1.6.0-rc6 -> v1.6.0-rc6 2025-08-14T21:24:06.2595294Z * [new tag] v1.6.0-rc7 -> v1.6.0-rc7 2025-08-14T21:24:06.2595405Z * [new tag] v1.7.0 -> v1.7.0 2025-08-14T21:24:06.2595810Z * [new tag] v1.7.0-rc1 -> v1.7.0-rc1 2025-08-14T21:24:06.2596026Z * [new tag] v1.7.0-rc2 -> v1.7.0-rc2 2025-08-14T21:24:06.2596333Z * [new tag] v1.7.0-rc3 -> v1.7.0-rc3 2025-08-14T21:24:06.2596438Z * [new tag] v1.7.0-rc4 -> v1.7.0-rc4 2025-08-14T21:24:06.2600594Z * [new tag] v1.7.1 -> v1.7.1 2025-08-14T21:24:06.2600832Z * [new tag] v1.7.1-rc1 -> v1.7.1-rc1 2025-08-14T21:24:06.2600943Z * [new tag] v1.7.1-rc2 -> v1.7.1-rc2 2025-08-14T21:24:06.2601174Z * [new tag] v1.7.1-rc3 -> v1.7.1-rc3 2025-08-14T21:24:06.2601307Z * [new tag] v1.8.0 -> v1.8.0 2025-08-14T21:24:06.2601414Z * [new tag] v1.8.0-rc1 -> v1.8.0-rc1 2025-08-14T21:24:06.2601679Z * [new tag] v1.8.0-rc2 -> v1.8.0-rc2 2025-08-14T21:24:06.2601924Z * [new tag] v1.8.0-rc3 -> v1.8.0-rc3 2025-08-14T21:24:06.2602049Z * [new tag] v1.8.0-rc4 -> v1.8.0-rc4 2025-08-14T21:24:06.2602669Z * [new tag] v1.8.0-rc5 -> v1.8.0-rc5 2025-08-14T21:24:06.2602828Z * [new tag] v1.8.1 -> v1.8.1 2025-08-14T21:24:06.2602957Z * [new tag] v1.8.1-rc1 -> v1.8.1-rc1 2025-08-14T21:24:06.2603067Z * [new tag] v1.8.1-rc2 -> v1.8.1-rc2 2025-08-14T21:24:06.2603180Z * [new tag] v1.8.1-rc3 -> v1.8.1-rc3 2025-08-14T21:24:06.2603284Z * [new tag] v1.8.2 -> v1.8.2 2025-08-14T21:24:06.2603392Z * [new tag] v1.8.2-rc1 -> v1.8.2-rc1 2025-08-14T21:24:06.2603516Z * [new tag] v1.9.0 -> v1.9.0 2025-08-14T21:24:06.2603625Z * [new tag] v1.9.0-rc1 -> v1.9.0-rc1 2025-08-14T21:24:06.2603737Z * [new tag] v1.9.0-rc2 -> v1.9.0-rc2 2025-08-14T21:24:06.2610002Z * [new tag] v1.9.0-rc3 -> v1.9.0-rc3 2025-08-14T21:24:06.2610249Z * [new tag] v1.9.0-rc4 -> v1.9.0-rc4 2025-08-14T21:24:06.2614754Z * [new tag] v1.9.1 -> v1.9.1 2025-08-14T21:24:06.2615058Z * [new tag] v1.9.1-rc1 -> v1.9.1-rc1 2025-08-14T21:24:06.2615205Z * [new tag] v1.9.1-rc2 -> v1.9.1-rc2 2025-08-14T21:24:06.2615314Z * [new tag] v2.0.0 -> v2.0.0 2025-08-14T21:24:06.2615428Z * [new tag] v2.0.0-rc1 -> v2.0.0-rc1 2025-08-14T21:24:06.2615531Z * [new tag] v2.0.0-rc2 -> v2.0.0-rc2 2025-08-14T21:24:06.2615813Z * [new tag] v2.0.0-rc3 -> v2.0.0-rc3 2025-08-14T21:24:06.2616358Z * [new tag] v2.0.0-rc4 -> v2.0.0-rc4 2025-08-14T21:24:06.2616499Z * [new tag] v2.0.0-rc5 -> v2.0.0-rc5 2025-08-14T21:24:06.2616610Z * [new tag] v2.0.0-rc6 -> v2.0.0-rc6 2025-08-14T21:24:06.2616717Z * [new tag] v2.0.1 -> v2.0.1 2025-08-14T21:24:06.2616822Z * [new tag] v2.0.1-rc1 -> v2.0.1-rc1 
2025-08-14T21:24:06.2616930Z * [new tag] v2.0.1-rc2 -> v2.0.1-rc2 2025-08-14T21:24:06.2617029Z * [new tag] v2.0.1-rc3 -> v2.0.1-rc3 2025-08-14T21:24:06.2617135Z * [new tag] v2.0.1-rc4 -> v2.0.1-rc4 2025-08-14T21:24:06.2617234Z * [new tag] v2.1.0 -> v2.1.0 2025-08-14T21:24:06.2617345Z * [new tag] v2.1.0-rc1 -> v2.1.0-rc1 2025-08-14T21:24:06.2617586Z * [new tag] v2.1.0-rc2 -> v2.1.0-rc2 2025-08-14T21:24:06.2617692Z * [new tag] v2.1.0-rc3 -> v2.1.0-rc3 2025-08-14T21:24:06.2617791Z * [new tag] v2.1.0-rc4 -> v2.1.0-rc4 2025-08-14T21:24:06.2620155Z * [new tag] v2.1.0-rc5 -> v2.1.0-rc5 2025-08-14T21:24:06.2626552Z * [new tag] v2.1.0-rc6 -> v2.1.0-rc6 2025-08-14T21:24:06.2628533Z * [new tag] v2.1.1 -> v2.1.1 2025-08-14T21:24:06.2628794Z * [new tag] v2.1.1-rc1 -> v2.1.1-rc1 2025-08-14T21:24:06.2636044Z * [new tag] v2.1.1-rc2 -> v2.1.1-rc2 2025-08-14T21:24:06.2639580Z * [new tag] v2.1.1-rc3 -> v2.1.1-rc3 2025-08-14T21:24:06.2641487Z * [new tag] v2.1.1-rc4 -> v2.1.1-rc4 2025-08-14T21:24:06.2641923Z * [new tag] v2.1.1-rc5 -> v2.1.1-rc5 2025-08-14T21:24:06.2642086Z * [new tag] v2.1.1-rc6 -> v2.1.1-rc6 2025-08-14T21:24:06.2642192Z * [new tag] v2.1.2 -> v2.1.2 2025-08-14T21:24:06.2642292Z * [new tag] v2.1.2-rc1 -> v2.1.2-rc1 2025-08-14T21:24:06.2642399Z * [new tag] v2.1.2-rc2 -> v2.1.2-rc2 2025-08-14T21:24:06.2642497Z * [new tag] v2.1.2-rc3 -> v2.1.2-rc3 2025-08-14T21:24:06.2642602Z * [new tag] v2.2.0 -> v2.2.0 2025-08-14T21:24:06.2642701Z * [new tag] v2.2.0-rc1 -> v2.2.0-rc1 2025-08-14T21:24:06.2642799Z * [new tag] v2.2.0-rc2 -> v2.2.0-rc2 2025-08-14T21:24:06.2642905Z * [new tag] v2.2.0-rc3 -> v2.2.0-rc3 2025-08-14T21:24:06.2643023Z * [new tag] v2.2.0-rc4 -> v2.2.0-rc4 2025-08-14T21:24:06.2643119Z * [new tag] v2.2.0-rc5 -> v2.2.0-rc5 2025-08-14T21:24:06.2643221Z * [new tag] v2.2.0-rc6 -> v2.2.0-rc6 2025-08-14T21:24:06.2643316Z * [new tag] v2.2.0-rc7 -> v2.2.0-rc7 2025-08-14T21:24:06.2643419Z * [new tag] v2.2.0-rc8 -> v2.2.0-rc8 2025-08-14T21:24:06.2643516Z * [new tag] v2.2.1 -> v2.2.1 2025-08-14T21:24:06.2643610Z * [new tag] v2.2.1-rc1 -> v2.2.1-rc1 2025-08-14T21:24:06.2643713Z * [new tag] v2.2.1-rc2 -> v2.2.1-rc2 2025-08-14T21:24:06.2643807Z * [new tag] v2.2.1-rc3 -> v2.2.1-rc3 2025-08-14T21:24:06.2643900Z * [new tag] v2.2.2 -> v2.2.2 2025-08-14T21:24:06.2644019Z * [new tag] v2.2.2-rc1 -> v2.2.2-rc1 2025-08-14T21:24:06.2644117Z * [new tag] v2.2.2-rc2 -> v2.2.2-rc2 2025-08-14T21:24:06.2644221Z * [new tag] v2.2.2-rc3 -> v2.2.2-rc3 2025-08-14T21:24:06.2644316Z * [new tag] v2.3.0 -> v2.3.0 2025-08-14T21:24:06.2644414Z * [new tag] v2.3.0-rc1 -> v2.3.0-rc1 2025-08-14T21:24:06.2644553Z * [new tag] v2.3.0-rc10 -> v2.3.0-rc10 2025-08-14T21:24:06.2644663Z * [new tag] v2.3.0-rc11 -> v2.3.0-rc11 2025-08-14T21:24:06.2644767Z * [new tag] v2.3.0-rc12 -> v2.3.0-rc12 2025-08-14T21:24:06.2644866Z * [new tag] v2.3.0-rc2 -> v2.3.0-rc2 2025-08-14T21:24:06.2644961Z * [new tag] v2.3.0-rc3 -> v2.3.0-rc3 2025-08-14T21:24:06.2645068Z * [new tag] v2.3.0-rc4 -> v2.3.0-rc4 2025-08-14T21:24:06.2645368Z * [new tag] v2.3.0-rc5 -> v2.3.0-rc5 2025-08-14T21:24:06.2645470Z * [new tag] v2.3.0-rc6 -> v2.3.0-rc6 2025-08-14T21:24:06.2645576Z * [new tag] v2.3.0-rc7 -> v2.3.0-rc7 2025-08-14T21:24:06.2645673Z * [new tag] v2.3.0-rc8 -> v2.3.0-rc8 2025-08-14T21:24:06.2645779Z * [new tag] v2.3.0-rc9 -> v2.3.0-rc9 2025-08-14T21:24:06.2645877Z * [new tag] v2.3.1 -> v2.3.1 2025-08-14T21:24:06.2645975Z * [new tag] v2.3.1-rc1 -> v2.3.1-rc1 2025-08-14T21:24:06.2646082Z * [new tag] v2.3.1-rc2 -> v2.3.1-rc2 2025-08-14T21:24:06.2646181Z * [new tag] 
v2.3.1-rc3 -> v2.3.1-rc3 2025-08-14T21:24:06.2646350Z * [new tag] v2.4.0 -> v2.4.0 2025-08-14T21:24:06.2646460Z * [new tag] v2.4.0-rc1 -> v2.4.0-rc1 2025-08-14T21:24:06.2646558Z * [new tag] v2.4.0-rc2 -> v2.4.0-rc2 2025-08-14T21:24:06.2646664Z * [new tag] v2.4.0-rc3 -> v2.4.0-rc3 2025-08-14T21:24:06.2647175Z * [new tag] v2.4.0-rc4 -> v2.4.0-rc4 2025-08-14T21:24:06.2647332Z * [new tag] v2.4.0-rc5 -> v2.4.0-rc5 2025-08-14T21:24:06.2652033Z * [new tag] v2.4.0-rc6 -> v2.4.0-rc6 2025-08-14T21:24:06.2652186Z * [new tag] v2.4.0-rc7 -> v2.4.0-rc7 2025-08-14T21:24:06.2652295Z * [new tag] v2.4.0-rc8 -> v2.4.0-rc8 2025-08-14T21:24:06.2652400Z * [new tag] v2.4.0-rc9 -> v2.4.0-rc9 2025-08-14T21:24:06.2652519Z * [new tag] v2.4.1 -> v2.4.1 2025-08-14T21:24:06.2653051Z * [new tag] v2.4.1-rc1 -> v2.4.1-rc1 2025-08-14T21:24:06.2653321Z * [new tag] v2.4.1-rc2 -> v2.4.1-rc2 2025-08-14T21:24:06.2653430Z * [new tag] v2.4.1-rc3 -> v2.4.1-rc3 2025-08-14T21:24:06.2653536Z * [new tag] v2.5.0 -> v2.5.0 2025-08-14T21:24:06.2653636Z * [new tag] v2.5.0-rc1 -> v2.5.0-rc1 2025-08-14T21:24:06.2653747Z * [new tag] v2.5.0-rc10 -> v2.5.0-rc10 2025-08-14T21:24:06.2654088Z * [new tag] v2.5.0-rc2 -> v2.5.0-rc2 2025-08-14T21:24:06.2655189Z * [new tag] v2.5.0-rc3 -> v2.5.0-rc3 2025-08-14T21:24:06.2655334Z * [new tag] v2.5.0-rc4 -> v2.5.0-rc4 2025-08-14T21:24:06.2656188Z * [new tag] v2.5.0-rc5 -> v2.5.0-rc5 2025-08-14T21:24:06.2656778Z * [new tag] v2.5.0-rc6 -> v2.5.0-rc6 2025-08-14T21:24:06.2657716Z * [new tag] v2.5.0-rc7 -> v2.5.0-rc7 2025-08-14T21:24:06.2658030Z * [new tag] v2.5.0-rc8 -> v2.5.0-rc8 2025-08-14T21:24:06.2659047Z * [new tag] v2.5.0-rc9 -> v2.5.0-rc9 2025-08-14T21:24:06.2659852Z * [new tag] v2.5.1 -> v2.5.1 2025-08-14T21:24:06.2659996Z * [new tag] v2.5.1-rc1 -> v2.5.1-rc1 2025-08-14T21:24:06.2660112Z * [new tag] v2.6.0 -> v2.6.0 2025-08-14T21:24:06.2661154Z * [new tag] v2.6.0-rc1 -> v2.6.0-rc1 2025-08-14T21:24:06.2661408Z * [new tag] v2.6.0-rc2 -> v2.6.0-rc2 2025-08-14T21:24:06.2662403Z * [new tag] v2.6.0-rc3 -> v2.6.0-rc3 2025-08-14T21:24:06.2662553Z * [new tag] v2.6.0-rc4 -> v2.6.0-rc4 2025-08-14T21:24:06.2665850Z * [new tag] v2.6.0-rc5 -> v2.6.0-rc5 2025-08-14T21:24:06.2666001Z * [new tag] v2.6.0-rc6 -> v2.6.0-rc6 2025-08-14T21:24:06.2666117Z * [new tag] v2.6.0-rc7 -> v2.6.0-rc7 2025-08-14T21:24:06.2666224Z * [new tag] v2.6.0-rc8 -> v2.6.0-rc8 2025-08-14T21:24:06.2666363Z * [new tag] v2.6.0-rc9 -> v2.6.0-rc9 2025-08-14T21:24:06.2671438Z * [new tag] v2.7.0 -> v2.7.0 2025-08-14T21:24:06.2671613Z * [new tag] v2.7.0-rc1 -> v2.7.0-rc1 2025-08-14T21:24:06.2671849Z * [new tag] v2.7.0-rc10 -> v2.7.0-rc10 2025-08-14T21:24:06.2671971Z * [new tag] v2.7.0-rc2 -> v2.7.0-rc2 2025-08-14T21:24:06.2672071Z * [new tag] v2.7.0-rc3 -> v2.7.0-rc3 2025-08-14T21:24:06.2672537Z * [new tag] v2.7.0-rc4 -> v2.7.0-rc4 2025-08-14T21:24:06.2672754Z * [new tag] v2.7.0-rc5 -> v2.7.0-rc5 2025-08-14T21:24:06.2672875Z * [new tag] v2.7.0-rc6 -> v2.7.0-rc6 2025-08-14T21:24:06.2675642Z * [new tag] v2.7.0-rc7 -> v2.7.0-rc7 2025-08-14T21:24:06.2675762Z * [new tag] v2.7.0-rc8 -> v2.7.0-rc8 2025-08-14T21:24:06.2675904Z * [new tag] v2.7.0-rc9 -> v2.7.0-rc9 2025-08-14T21:24:06.2676030Z * [new tag] v2.7.1 -> v2.7.1 2025-08-14T21:24:06.2676141Z * [new tag] v2.7.1-rc1 -> v2.7.1-rc1 2025-08-14T21:24:06.2676242Z * [new tag] v2.7.1-rc2 -> v2.7.1-rc2 2025-08-14T21:24:06.2676345Z * [new tag] v2.7.1-rc3 -> v2.7.1-rc3 2025-08-14T21:24:06.2676588Z * [new tag] v2.7.1-rc4 -> v2.7.1-rc4 2025-08-14T21:24:06.2681195Z * [new tag] v2.7.1-rc5 -> v2.7.1-rc5 
2025-08-14T21:24:06.2681334Z * [new tag] v2.8.0 -> v2.8.0 2025-08-14T21:24:06.2681433Z * [new tag] v2.8.0-rc1 -> v2.8.0-rc1 2025-08-14T21:24:06.2681531Z * [new tag] v2.8.0-rc2 -> v2.8.0-rc2 2025-08-14T21:24:06.2681760Z * [new tag] v2.8.0-rc3 -> v2.8.0-rc3 2025-08-14T21:24:06.2681880Z * [new tag] v2.8.0-rc4 -> v2.8.0-rc4 2025-08-14T21:24:06.2681986Z * [new tag] v2.8.0-rc5 -> v2.8.0-rc5 2025-08-14T21:24:06.2682210Z * [new tag] v2.8.0-rc6 -> v2.8.0-rc6 2025-08-14T21:24:06.2682312Z * [new tag] v2.8.0-rc7 -> v2.8.0-rc7 2025-08-14T21:24:06.2682461Z * [new tag] v2.8.0-rc8 -> v2.8.0-rc8 2025-08-14T21:24:06.2682601Z * [new tag] whc_flight_1 -> whc_flight_1 2025-08-14T21:24:06.2682723Z * [new tag] whc_flight_2 -> whc_flight_2 2025-08-14T21:24:06.2682835Z * [new tag] whc_flight_4 -> whc_flight_4 2025-08-14T21:24:06.3168141Z [command]/usr/bin/git rev-parse --verify --quiet 1fc683cf17c8c673044538d10266c00f92987be2^{object} 2025-08-14T21:24:06.3192482Z 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:24:06.3193448Z ##[endgroup] 2025-08-14T21:24:06.3193670Z ##[group]Determining the checkout info 2025-08-14T21:24:06.3193987Z ##[endgroup] 2025-08-14T21:24:06.3199613Z [command]/usr/bin/git sparse-checkout disable 2025-08-14T21:24:06.3242987Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-08-14T21:24:06.3271576Z ##[group]Checking out the ref 2025-08-14T21:24:06.3275920Z [command]/usr/bin/git checkout --progress --force 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:24:07.3653301Z Updating files: 98% (19106/19474) 2025-08-14T21:24:07.3790716Z Updating files: 99% (19280/19474) 2025-08-14T21:24:07.3791044Z Updating files: 100% (19474/19474) 2025-08-14T21:24:07.3791271Z Updating files: 100% (19474/19474), done. 2025-08-14T21:24:07.4035621Z Note: switching to '1fc683cf17c8c673044538d10266c00f92987be2'. 2025-08-14T21:24:07.4036117Z 2025-08-14T21:24:07.4036422Z You are in 'detached HEAD' state. You can look around, make experimental 2025-08-14T21:24:07.4036811Z changes and commit them, and you can discard any commits you make in this 2025-08-14T21:24:07.4037189Z state without impacting any branches by switching back to a branch. 2025-08-14T21:24:07.4038882Z 2025-08-14T21:24:07.4039146Z If you want to create a new branch to retain commits you create, you may 2025-08-14T21:24:07.4039564Z do so (now or later) by using -c with the switch command. 
Example: 2025-08-14T21:24:07.4043488Z 2025-08-14T21:24:07.4043762Z git switch -c <new-branch-name> 2025-08-14T21:24:07.4043947Z 2025-08-14T21:24:07.4044062Z Or undo this operation with: 2025-08-14T21:24:07.4044208Z 2025-08-14T21:24:07.4044283Z git switch - 2025-08-14T21:24:07.4044386Z 2025-08-14T21:24:07.4044570Z Turn off this advice by setting config variable advice.detachedHead to false 2025-08-14T21:24:07.4044808Z 2025-08-14T21:24:07.4045095Z HEAD is now at 1fc683cf17c [Inductor] Allow indexing a flexible layout for extract_input_node_reduction_ranges (#160645) 2025-08-14T21:24:07.4096702Z ##[endgroup] 2025-08-14T21:24:07.4097086Z ##[group]Setting up auth for fetching submodules 2025-08-14T21:24:07.4103707Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-08-14T21:24:07.4169321Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-08-14T21:24:07.4203631Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-08-14T21:24:07.4235104Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-08-14T21:24:07.4277778Z ##[endgroup] 2025-08-14T21:24:07.4278137Z ##[group]Fetching submodules 2025-08-14T21:24:07.4278547Z [command]/usr/bin/git submodule sync --recursive 2025-08-14T21:24:07.4605806Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive 2025-08-14T21:24:07.4928930Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni' 2025-08-14T21:24:07.5329003Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16' 2025-08-14T21:24:07.5329725Z Submodule 'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv' 2025-08-14T21:24:07.5330358Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK' 2025-08-14T21:24:07.5330984Z Submodule 'third_party/NVTX' (https://github.com/NVIDIA/NVTX.git) registered for path 'third_party/NVTX' 2025-08-14T21:24:07.5354159Z Submodule 'third_party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'third_party/VulkanMemoryAllocator' 2025-08-14T21:24:07.5355139Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK' 2025-08-14T21:24:07.5355695Z Submodule 'third_party/aiter' (https://github.com/ROCm/aiter.git) registered for path 'third_party/aiter' 2025-08-14T21:24:07.5357195Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/benchmark' 2025-08-14T21:24:07.5360086Z Submodule 'third_party/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/composable_kernel' 2025-08-14T21:24:07.5378399Z Submodule 'third_party/cpp-httplib' (https://github.com/yhirose/cpp-httplib.git) registered for path 'third_party/cpp-httplib' 2025-08-14T21:24:07.5381257Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo' 2025-08-14T21:24:07.5385080Z Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend' 2025-08-14T21:24:07.5385918Z Submodule 'third_party/cutlass' 
(https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/cutlass' 2025-08-14T21:24:07.5406271Z Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm' 2025-08-14T21:24:07.5407025Z Submodule 'third_party/flash-attention' (https://github.com/Dao-AILab/flash-attention.git) registered for path 'third_party/flash-attention' 2025-08-14T21:24:07.5410074Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers' 2025-08-14T21:24:07.5413633Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt' 2025-08-14T21:24:07.5416092Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:24:07.5434619Z Submodule 'third_party/gloo' (https://github.com/pytorch/gloo) registered for path 'third_party/gloo' 2025-08-14T21:24:07.5435204Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest' 2025-08-14T21:24:07.5437006Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep' 2025-08-14T21:24:07.5439668Z Submodule 'third_party/ittapi' (https://github.com/intel/ittapi.git) registered for path 'third_party/ittapi' 2025-08-14T21:24:07.5459126Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto' 2025-08-14T21:24:07.5460535Z Submodule 'third_party/kleidiai' (https://github.com/ARM-software/kleidiai.git) registered for path 'third_party/kleidiai' 2025-08-14T21:24:07.5467150Z Submodule 'third_party/mimalloc' (https://github.com/microsoft/mimalloc.git) registered for path 'third_party/mimalloc' 2025-08-14T21:24:07.5469252Z Submodule 'third_party/nlohmann' (https://github.com/nlohmann/json.git) registered for path 'third_party/nlohmann' 2025-08-14T21:24:07.5469965Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx' 2025-08-14T21:24:07.5494642Z Submodule 'third_party/opentelemetry-cpp' (https://github.com/open-telemetry/opentelemetry-cpp.git) registered for path 'third_party/opentelemetry-cpp' 2025-08-14T21:24:07.5495539Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft' 2025-08-14T21:24:07.5497403Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'third_party/protobuf' 2025-08-14T21:24:07.5503902Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd' 2025-08-14T21:24:07.5517192Z Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool' 2025-08-14T21:24:07.5518055Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11' 2025-08-14T21:24:07.5522851Z Submodule 'third_party/python-peachpy' (https://github.com/malfet/PeachPy.git) registered for path 'third_party/python-peachpy' 2025-08-14T21:24:07.5526191Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef' 2025-08-14T21:24:07.5529541Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe' 2025-08-14T21:24:07.5577989Z Cloning into 
'/home/ec2-user/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'... 2025-08-14T21:24:07.7942581Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FP16'... 2025-08-14T21:24:07.7943533Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'... 2025-08-14T21:24:07.7944146Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/psimd'... 2025-08-14T21:24:07.7971607Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'... 2025-08-14T21:24:07.9848316Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'... 2025-08-14T21:24:07.9849174Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'... 2025-08-14T21:24:07.9849888Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep'... 2025-08-14T21:24:07.9863136Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pybind11'... 2025-08-14T21:24:09.0777716Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'... 2025-08-14T21:24:09.0778550Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gloo'... 2025-08-14T21:24:09.0779021Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/benchmark'... 2025-08-14T21:24:09.0779512Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kleidiai'... 2025-08-14T21:24:09.0780106Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NVTX'... 2025-08-14T21:24:09.0780554Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ittapi'... 2025-08-14T21:24:09.0781024Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'... 2025-08-14T21:24:09.0781516Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpp-httplib'... 2025-08-14T21:24:09.0782011Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention'... 2025-08-14T21:24:09.0782495Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'... 2025-08-14T21:24:09.0782945Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'... 2025-08-14T21:24:09.0783428Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/googletest'... 2025-08-14T21:24:09.0783893Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/mimalloc'... 2025-08-14T21:24:09.0784343Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/sleef'... 2025-08-14T21:24:09.1779433Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/VulkanMemoryAllocator'... 2025-08-14T21:24:09.3470220Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto'... 2025-08-14T21:24:09.3470757Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'... 2025-08-14T21:24:09.3471224Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fmt'... 2025-08-14T21:24:09.4276896Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'... 2025-08-14T21:24:21.6189909Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'... 
2025-08-14T21:24:21.6195420Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'... 2025-08-14T21:24:21.6200015Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cutlass'... 2025-08-14T21:24:21.6204223Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx'... 2025-08-14T21:24:21.6208616Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/composable_kernel'... 2025-08-14T21:24:21.6209142Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/aiter'... 2025-08-14T21:24:21.6209615Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp'... 2025-08-14T21:24:21.6210141Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nlohmann'... 2025-08-14T21:24:21.6210877Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf'... 2025-08-14T21:24:21.6337891Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2025-08-14T21:24:21.6463949Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2025-08-14T21:24:21.6563301Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2025-08-14T21:24:21.6782019Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2025-08-14T21:24:21.7487322Z Submodule path 'third_party/NVTX': checked out '2942f167cc30c5e3a44a2aecd5b0d9c07ff61a07' 2025-08-14T21:24:21.7932002Z Submodule path 'third_party/VulkanMemoryAllocator': checked out '1d8f600fd424278486eade7ed3e877c99f0846b1' 2025-08-14T21:24:22.3392476Z Submodule path 'third_party/XNNPACK': checked out '51a0103656eff6fc9bfd39a4597923c4b542c883' 2025-08-14T21:24:22.4678785Z Submodule path 'third_party/aiter': checked out '01aae101b9e5e94d6c16a9514c9fb8df99c93150' 2025-08-14T21:24:22.4698814Z Submodule '3rdparty/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:24:22.4726485Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/aiter/3rdparty/composable_kernel'... 
2025-08-14T21:24:25.8490184Z Submodule path 'third_party/aiter/3rdparty/composable_kernel': checked out 'cffe8fa2a442ac8e80dd236a1a5d24fe3d7e0cbf' 2025-08-14T21:24:25.8700328Z Submodule path 'third_party/benchmark': checked out '299e5928955cc62af9968370293b916f5130916f' 2025-08-14T21:24:26.1211783Z Submodule path 'third_party/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977' 2025-08-14T21:24:26.1652858Z Submodule path 'third_party/cpp-httplib': checked out '3af7f2c16147f3fbc6e4d717032daf505dc1652c' 2025-08-14T21:24:26.2534010Z Submodule path 'third_party/cpuinfo': checked out '5e3d2445e6a84d9599bee2bf78edbb4d80865e1d' 2025-08-14T21:24:26.2928497Z Submodule path 'third_party/cudnn_frontend': checked out 'f937055efc6d414d11f4c6577e3977fe74f35fb6' 2025-08-14T21:24:26.8177748Z Submodule path 'third_party/cutlass': checked out 'e51efbfe18fe4f4cbb66ab814c55bf4aa0185491' 2025-08-14T21:24:26.9295137Z Submodule path 'third_party/fbgemm': checked out '21c7d30c526c0f1ad873ecc632dca6cfa8a69067' 2025-08-14T21:24:26.9312639Z Submodule 'external/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/external/asmjit' 2025-08-14T21:24:26.9313525Z Submodule 'external/composable_kernel' (https://github.com/jwfromm/composable_kernel.git) registered for path 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:24:26.9314324Z Submodule 'external/cpuinfo' (https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:24:26.9319642Z Submodule 'external/cutlass' (https://github.com/jwfromm/cutlass) registered for path 'third_party/fbgemm/external/cutlass' 2025-08-14T21:24:26.9320297Z Submodule 'external/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/external/googletest' 2025-08-14T21:24:26.9320984Z Submodule 'external/hipify_torch' (https://github.com/ROCmSoftwarePlatform/hipify_torch.git) registered for path 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:24:26.9321634Z Submodule 'external/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/fbgemm/external/json' 2025-08-14T21:24:26.9353712Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/asmjit'... 2025-08-14T21:24:28.1843491Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/hipify_torch'... 2025-08-14T21:24:28.1844110Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/cpuinfo'... 2025-08-14T21:24:28.1844954Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/googletest'... 2025-08-14T21:24:28.1845519Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/composable_kernel'... 2025-08-14T21:24:28.2847991Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/cutlass'... 2025-08-14T21:24:29.2488685Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/json'... 
2025-08-14T21:24:33.4725262Z Submodule path 'third_party/fbgemm/external/asmjit': checked out 'a3199e8857792cd10b7589ff5d58343d2c9008ea' 2025-08-14T21:24:33.6780208Z Submodule path 'third_party/fbgemm/external/composable_kernel': checked out 'b1281b8b08d973a7064f864f47eeb30f3e2596e9' 2025-08-14T21:24:33.7686469Z Submodule path 'third_party/fbgemm/external/cpuinfo': checked out '6543fec09b2f04ac4a666882998b534afc9c1349' 2025-08-14T21:24:34.2789298Z Submodule path 'third_party/fbgemm/external/cutlass': checked out 'b40777404c174b9694a870bff5c13ce6b7f656ad' 2025-08-14T21:24:34.3198695Z Submodule path 'third_party/fbgemm/external/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-08-14T21:24:34.3320026Z Submodule path 'third_party/fbgemm/external/hipify_torch': checked out 'a4337c69fe0e2552a7b7b0669178926beeed828c' 2025-08-14T21:24:34.4207585Z Submodule path 'third_party/fbgemm/external/json': checked out '9cca280a4d0ccf0c08f47a99aa71d1b0e52f8d03' 2025-08-14T21:24:34.4787242Z Submodule path 'third_party/flash-attention': checked out '979702c87a8713a8e0a5e9fee122b90d2ef13be5' 2025-08-14T21:24:34.4805911Z Submodule 'csrc/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:24:34.4806882Z Submodule 'csrc/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:24:34.4838479Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/composable_kernel'... 2025-08-14T21:24:37.6860430Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/cutlass'... 2025-08-14T21:24:37.8843400Z Submodule path 'third_party/flash-attention/csrc/composable_kernel': checked out '888317e698e9803c62bd38568abc9e05d7709f33' 2025-08-14T21:24:38.3862567Z Submodule path 'third_party/flash-attention/csrc/cutlass': checked out 'c506e16788cb08416a4a57e11a9067beeee29420' 2025-08-14T21:24:38.5009023Z Submodule path 'third_party/flatbuffers': checked out 'a2cd1ea3b6d3fee220106b5fed3f7ce8da9eb757' 2025-08-14T21:24:38.5332129Z Submodule path 'third_party/fmt': checked out '40626af88bd7df9a5fb80be7b25ac85b122d6c21' 2025-08-14T21:24:38.5696059Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2025-08-14T21:24:38.5927466Z Submodule path 'third_party/gloo': checked out 'c7b7b022c124d9643957d9bd55f57ac59fce8fa2' 2025-08-14T21:24:38.6353591Z Submodule path 'third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-08-14T21:24:38.6484188Z Submodule path 'third_party/ideep': checked out '719d8e6cd7f7a0e01b155657526d693acf97c2b3' 2025-08-14T21:24:38.6499849Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn' 2025-08-14T21:24:38.6528962Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'... 
2025-08-14T21:24:50.2250019Z Submodule path 'third_party/ideep/mkl-dnn': checked out '8d263e693366ef8db40acc569cc7d8edf644556d' 2025-08-14T21:24:50.2442999Z Submodule path 'third_party/ittapi': checked out 'dec1d23ca65ab069d225dfe40dea14f455170959' 2025-08-14T21:24:50.3323733Z Submodule path 'third_party/kineto': checked out '5e7501833f1021ce6f618572d3baf657b6319658' 2025-08-14T21:24:50.3340606Z Submodule 'libkineto/third_party/dynolog' (https://github.com/facebookincubator/dynolog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:24:50.3342175Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:24:50.3342972Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:24:50.3371654Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog'... 2025-08-14T21:24:50.9993313Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'... 2025-08-14T21:24:51.6432599Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'... 2025-08-14T21:24:51.7153551Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out '7d04a0053a845370ae06ce317a22a48e9edcc74e' 2025-08-14T21:24:51.7174841Z Submodule 'third_party/DCGM' (https://github.com/NVIDIA/DCGM.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:24:51.7175698Z Submodule 'third_party/cpr' (https://github.com/libcpr/cpr.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:24:51.7176440Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:24:51.7177224Z Submodule 'third_party/gflags' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:24:51.7177944Z Submodule 'third_party/glog' (https://github.com/google/glog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:24:51.7178675Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:24:51.7179420Z Submodule 'third_party/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:24:51.7185875Z Submodule 'third_party/pfs' (https://github.com/dtrugman/pfs.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:24:51.7209744Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'... 2025-08-14T21:24:52.9198150Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'... 2025-08-14T21:24:52.9198974Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'... 
2025-08-14T21:24:52.9199687Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'... 2025-08-14T21:24:52.9200427Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/glog'... 2025-08-14T21:24:52.9201123Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'... 2025-08-14T21:24:52.9898914Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'... 2025-08-14T21:24:53.0901558Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/json'... 2025-08-14T21:24:58.3923787Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9' 2025-08-14T21:24:58.4081143Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400' 2025-08-14T21:24:58.4410755Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05' 2025-08-14T21:24:58.4544374Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067' 2025-08-14T21:24:58.4557425Z Submodule 'doc' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:24:58.4589238Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'... 
2025-08-14T21:24:58.7498144Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4' 2025-08-14T21:24:58.7675426Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446' 2025-08-14T21:24:58.8055043Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '58d77fa8070e8cec2dc1ed015d66b454c8d78850' 2025-08-14T21:24:58.8935683Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5' 2025-08-14T21:24:58.9094284Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150' 2025-08-14T21:24:58.9408718Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '0041a40c1350ba702d475b9c4ad62da77caea164' 2025-08-14T21:24:58.9962904Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' 2025-08-14T21:24:59.0354553Z Submodule path 'third_party/kleidiai': checked out 'cca02c2f69dd18e1f12647c1c0bdc8cf90e680c7' 2025-08-14T21:24:59.0692124Z Submodule path 'third_party/mimalloc': checked out 'fbd8b99c2b828428947d70fdc046bb55609be93e' 2025-08-14T21:24:59.1672839Z Submodule path 'third_party/nlohmann': checked out '55f93686c01528224f448c19128836e7df245f72' 2025-08-14T21:24:59.4614216Z Submodule path 'third_party/onnx': checked out 'e709452ef2bbc1d113faf678c24e6d3467696e83' 2025-08-14T21:24:59.4638535Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11' 2025-08-14T21:24:59.4669256Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'... 
2025-08-14T21:25:01.4120245Z Submodule path 'third_party/onnx/third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4' 2025-08-14T21:25:01.4653778Z Submodule path 'third_party/opentelemetry-cpp': checked out 'a799f4aed9c94b765dcdaabaeab7d5e7e2310878' 2025-08-14T21:25:01.4671513Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark) registered for path 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:25:01.4676087Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:25:01.4680666Z Submodule 'third_party/ms-gsl' (https://github.com/microsoft/GSL) registered for path 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:25:01.4682159Z Submodule 'third_party/nlohmann-json' (https://github.com/nlohmann/json) registered for path 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:25:01.4682917Z Submodule 'third_party/opentelemetry-proto' (https://github.com/open-telemetry/opentelemetry-proto) registered for path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:25:01.4683711Z Submodule 'third_party/opentracing-cpp' (https://github.com/opentracing/opentracing-cpp.git) registered for path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:25:01.4684719Z Submodule 'third_party/prometheus-cpp' (https://github.com/jupp0r/prometheus-cpp) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:25:01.4685358Z Submodule 'tools/vcpkg' (https://github.com/Microsoft/vcpkg) registered for path 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:25:01.4709433Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/benchmark'... 2025-08-14T21:25:01.9620022Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentracing-cpp'... 2025-08-14T21:25:01.9621379Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentelemetry-proto'... 2025-08-14T21:25:01.9622089Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/ms-gsl'... 2025-08-14T21:25:01.9622764Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp'... 2025-08-14T21:25:02.0621882Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/googletest'... 2025-08-14T21:25:02.5648903Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/nlohmann-json'... 2025-08-14T21:25:09.7157289Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/tools/vcpkg'... 
2025-08-14T21:25:10.1297276Z Submodule path 'third_party/opentelemetry-cpp/third_party/benchmark': checked out 'd572f4777349d43653b21d6c2fc63020ab326db2' 2025-08-14T21:25:10.1665006Z Submodule path 'third_party/opentelemetry-cpp/third_party/googletest': checked out 'b796f7d44681514f58a683a3a71ff17c94edb0c1' 2025-08-14T21:25:10.1822772Z Submodule path 'third_party/opentelemetry-cpp/third_party/ms-gsl': checked out '6f4529395c5b7c2d661812257cd6780c67e54afa' 2025-08-14T21:25:10.2791672Z Submodule path 'third_party/opentelemetry-cpp/third_party/nlohmann-json': checked out 'bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d' 2025-08-14T21:25:10.2922284Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto': checked out '4ca4f0335c63cda7ab31ea7ed70d6553aee14dce' 2025-08-14T21:25:10.3058393Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp': checked out '06b57f48ded1fa3bdd3d4346f6ef29e40e08eaf5' 2025-08-14T21:25:10.3196999Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp': checked out 'c9ffcdda9086ffd9e1283ea7a0276d831f3c8a8d' 2025-08-14T21:25:10.3213233Z Submodule 'civetweb' (https://github.com/civetweb/civetweb.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:25:10.3214318Z Submodule 'googletest' (https://github.com/google/googletest.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:25:10.3240607Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'... 2025-08-14T21:25:12.2672986Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'... 2025-08-14T21:25:12.4946304Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'eefb26f82b233268fc98577d265352720d477ba4' 2025-08-14T21:25:12.5370690Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2025-08-14T21:25:12.8742827Z Submodule path 'third_party/opentelemetry-cpp/tools/vcpkg': checked out '8eb57355a4ffb410a2e94c07b4dca2dffbee8e50' 2025-08-14T21:25:12.8869837Z Submodule path 'third_party/pocketfft': checked out '0fa0ef591e38c2758e3184c6c23e497b9f732ffa' 2025-08-14T21:25:13.1135648Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2025-08-14T21:25:13.1160616Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:25:13.1162119Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest' 2025-08-14T21:25:13.1190735Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'... 2025-08-14T21:25:13.6735998Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'... 
2025-08-14T21:25:14.0844934Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2025-08-14T21:25:14.1496840Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2025-08-14T21:25:14.1596555Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2025-08-14T21:25:14.1713153Z Submodule path 'third_party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8' 2025-08-14T21:25:14.2047762Z Submodule path 'third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4' 2025-08-14T21:25:14.2308996Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67' 2025-08-14T21:25:14.2706948Z Submodule path 'third_party/sleef': checked out '5a1d179df9cf652951b59010a2d2075372d67f68' 2025-08-14T21:25:14.2942567Z Submodule path 'third_party/tensorpipe': checked out 'dacda0567d9f23d4bc503e1c4f84aa65f33ac38a' 2025-08-14T21:25:14.2961449Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:25:14.2962359Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:25:14.2963033Z Submodule 'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:25:14.2963653Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:25:14.2993506Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'... 2025-08-14T21:25:15.2535561Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'... 2025-08-14T21:25:15.3050081Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'... 2025-08-14T21:25:15.5170321Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'... 2025-08-14T21:25:15.5705994Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2025-08-14T21:25:15.5860505Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2025-08-14T21:25:15.6529052Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '5152db2cbfeb5582e9c27c5ea1dba2cd9e10759b' 2025-08-14T21:25:15.6800187Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2025-08-14T21:25:15.6817917Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:25:15.6849498Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'... 
2025-08-14T21:25:15.8877901Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2025-08-14T21:25:15.8917320Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2025-08-14T21:25:15.9240702Z Entering 'android/libs/fbjni' 2025-08-14T21:25:15.9286933Z Entering 'third_party/FP16' 2025-08-14T21:25:15.9332811Z Entering 'third_party/FXdiv' 2025-08-14T21:25:15.9377127Z Entering 'third_party/NNPACK' 2025-08-14T21:25:15.9415149Z Entering 'third_party/NVTX' 2025-08-14T21:25:15.9458114Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:25:15.9501071Z Entering 'third_party/XNNPACK' 2025-08-14T21:25:15.9559984Z Entering 'third_party/aiter' 2025-08-14T21:25:15.9603593Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:25:15.9655645Z Entering 'third_party/benchmark' 2025-08-14T21:25:15.9696724Z Entering 'third_party/composable_kernel' 2025-08-14T21:25:15.9746508Z Entering 'third_party/cpp-httplib' 2025-08-14T21:25:15.9790139Z Entering 'third_party/cpuinfo' 2025-08-14T21:25:15.9830176Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:25:15.9880588Z Entering 'third_party/cutlass' 2025-08-14T21:25:15.9928912Z Entering 'third_party/fbgemm' 2025-08-14T21:25:15.9974797Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:25:16.0013537Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:25:16.0064039Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:25:16.0103242Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:25:16.0153311Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:25:16.0197677Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:25:16.0240451Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:25:16.0279931Z Entering 'third_party/flash-attention' 2025-08-14T21:25:16.0325311Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:25:16.0375387Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:25:16.0426321Z Entering 'third_party/flatbuffers' 2025-08-14T21:25:16.0472272Z Entering 'third_party/fmt' 2025-08-14T21:25:16.0515542Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:25:16.0559174Z Entering 'third_party/gloo' 2025-08-14T21:25:16.0601784Z Entering 'third_party/googletest' 2025-08-14T21:25:16.0643291Z Entering 'third_party/ideep' 2025-08-14T21:25:16.0684210Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:25:16.0729290Z Entering 'third_party/ittapi' 2025-08-14T21:25:16.0773621Z Entering 'third_party/kineto' 2025-08-14T21:25:16.0810624Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:25:16.0852679Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:25:16.0893934Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:25:16.0934633Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:25:16.0980314Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:25:16.1025043Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:25:16.1067995Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:25:16.1111917Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:25:16.1157529Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:25:16.1200324Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:25:16.1243625Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:25:16.1290845Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:25:16.1334989Z Entering 'third_party/kleidiai' 2025-08-14T21:25:16.1377698Z Entering 'third_party/mimalloc' 2025-08-14T21:25:16.1418496Z Entering 'third_party/nlohmann' 2025-08-14T21:25:16.1463897Z Entering 'third_party/onnx' 2025-08-14T21:25:16.1521145Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:25:16.1562437Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:25:16.1616804Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:25:16.1656934Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:25:16.1696491Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:25:16.1736622Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:25:16.1779041Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:25:16.1822033Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:25:16.1865697Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:25:16.1904790Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:25:16.1949544Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:25:16.1994807Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:25:16.2057650Z Entering 'third_party/pocketfft' 2025-08-14T21:25:16.2098771Z Entering 'third_party/protobuf' 2025-08-14T21:25:16.2147797Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:25:16.2191938Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:25:16.2239297Z Entering 'third_party/psimd' 2025-08-14T21:25:16.2279161Z Entering 'third_party/pthreadpool' 2025-08-14T21:25:16.2324906Z Entering 'third_party/pybind11' 2025-08-14T21:25:16.2373476Z Entering 'third_party/python-peachpy' 2025-08-14T21:25:16.2415816Z Entering 'third_party/sleef' 2025-08-14T21:25:16.2456373Z Entering 'third_party/tensorpipe' 2025-08-14T21:25:16.2499126Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:25:16.2538729Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:25:16.2581456Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:25:16.2621351Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:25:16.2662496Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:25:16.2718320Z ##[endgroup] 2025-08-14T21:25:16.2718682Z ##[group]Persisting credentials for submodules 2025-08-14T21:25:16.2724621Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2025-08-14T21:25:16.3060286Z Entering 'android/libs/fbjni' 2025-08-14T21:25:16.3119755Z Entering 'third_party/FP16' 2025-08-14T21:25:16.3179358Z Entering 'third_party/FXdiv' 2025-08-14T21:25:16.3235462Z Entering 'third_party/NNPACK' 2025-08-14T21:25:16.3300264Z Entering 'third_party/NVTX' 2025-08-14T21:25:16.3364377Z Entering 'third_party/VulkanMemoryAllocator' 
2025-08-14T21:25:16.3420302Z Entering 'third_party/XNNPACK' 2025-08-14T21:25:16.3495812Z Entering 'third_party/aiter' 2025-08-14T21:25:16.3554233Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:25:16.3615500Z Entering 'third_party/benchmark' 2025-08-14T21:25:16.3677562Z Entering 'third_party/composable_kernel' 2025-08-14T21:25:16.3737800Z Entering 'third_party/cpp-httplib' 2025-08-14T21:25:16.3799270Z Entering 'third_party/cpuinfo' 2025-08-14T21:25:16.3860065Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:25:16.3919564Z Entering 'third_party/cutlass' 2025-08-14T21:25:16.3980648Z Entering 'third_party/fbgemm' 2025-08-14T21:25:16.4037904Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:25:16.4105129Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:25:16.4164609Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:25:16.4219072Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:25:16.4286985Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:25:16.4341316Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:25:16.4415649Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:25:16.4480498Z Entering 'third_party/flash-attention' 2025-08-14T21:25:16.4535250Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:25:16.4600427Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:25:16.4667116Z Entering 'third_party/flatbuffers' 2025-08-14T21:25:16.4721517Z Entering 'third_party/fmt' 2025-08-14T21:25:16.4780927Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:25:16.4837401Z Entering 'third_party/gloo' 2025-08-14T21:25:16.4892712Z Entering 'third_party/googletest' 2025-08-14T21:25:16.4956663Z Entering 'third_party/ideep' 2025-08-14T21:25:16.5015493Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:25:16.5080854Z Entering 'third_party/ittapi' 2025-08-14T21:25:16.5135948Z Entering 'third_party/kineto' 2025-08-14T21:25:16.5196675Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:25:16.5255292Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:25:16.5310516Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:25:16.5368959Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:25:16.5422360Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:25:16.5475780Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:25:16.5536996Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:25:16.5597857Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:25:16.5657125Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:25:16.5717613Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:25:16.5779507Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:25:16.5837560Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:25:16.5897696Z Entering 'third_party/kleidiai' 2025-08-14T21:25:16.5958647Z Entering 'third_party/mimalloc' 2025-08-14T21:25:16.6019117Z Entering 'third_party/nlohmann' 2025-08-14T21:25:16.6080533Z Entering 'third_party/onnx' 2025-08-14T21:25:16.6153486Z Entering 'third_party/onnx/third_party/pybind11' 
2025-08-14T21:25:16.6217151Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:25:16.6274930Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:25:16.6332883Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:25:16.6390242Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:25:16.6471228Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:25:16.6513005Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:25:16.6570838Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:25:16.6622488Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:25:16.6680036Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:25:16.6735779Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:25:16.6798066Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:25:16.6874473Z Entering 'third_party/pocketfft' 2025-08-14T21:25:16.6930900Z Entering 'third_party/protobuf' 2025-08-14T21:25:16.6993034Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:25:16.7047687Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:25:16.7098928Z Entering 'third_party/psimd' 2025-08-14T21:25:16.7157241Z Entering 'third_party/pthreadpool' 2025-08-14T21:25:16.7219656Z Entering 'third_party/pybind11' 2025-08-14T21:25:16.7275723Z Entering 'third_party/python-peachpy' 2025-08-14T21:25:16.7333216Z Entering 'third_party/sleef' 2025-08-14T21:25:16.7389857Z Entering 'third_party/tensorpipe' 2025-08-14T21:25:16.7449476Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:25:16.7508118Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:25:16.7562180Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:25:16.7618592Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:25:16.7674480Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:25:16.7758847Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2025-08-14T21:25:16.8095379Z Entering 'android/libs/fbjni' 2025-08-14T21:25:16.8153169Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2025-08-14T21:25:16.8167254Z Entering 'third_party/FP16' 2025-08-14T21:25:16.8216647Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2025-08-14T21:25:16.8230815Z Entering 'third_party/FXdiv' 2025-08-14T21:25:16.8284963Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2025-08-14T21:25:16.8301632Z Entering 'third_party/NNPACK' 2025-08-14T21:25:16.8353341Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2025-08-14T21:25:16.8374264Z Entering 'third_party/NVTX' 2025-08-14T21:25:16.8420819Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url 2025-08-14T21:25:16.8442540Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:25:16.8495363Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2025-08-14T21:25:16.8509305Z Entering 'third_party/XNNPACK' 2025-08-14T21:25:16.8568063Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2025-08-14T21:25:16.8598755Z Entering 'third_party/aiter' 2025-08-14T21:25:16.8649956Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config remote.origin.url 2025-08-14T21:25:16.8669669Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:25:16.8721113Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config remote.origin.url 2025-08-14T21:25:16.8741497Z Entering 'third_party/benchmark' 2025-08-14T21:25:16.8793871Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2025-08-14T21:25:16.8816139Z Entering 'third_party/composable_kernel' 2025-08-14T21:25:16.8866913Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url 2025-08-14T21:25:16.8891407Z Entering 'third_party/cpp-httplib' 2025-08-14T21:25:16.8942051Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url 2025-08-14T21:25:16.8957241Z Entering 'third_party/cpuinfo' 2025-08-14T21:25:16.9009578Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2025-08-14T21:25:16.9029237Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:25:16.9081361Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2025-08-14T21:25:16.9102723Z Entering 'third_party/cutlass' 2025-08-14T21:25:16.9153446Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2025-08-14T21:25:16.9178593Z Entering 'third_party/fbgemm' 2025-08-14T21:25:16.9227631Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2025-08-14T21:25:16.9245178Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:25:16.9301661Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config remote.origin.url 2025-08-14T21:25:16.9317463Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:25:16.9367058Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config remote.origin.url 2025-08-14T21:25:16.9390861Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:25:16.9444537Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config remote.origin.url 2025-08-14T21:25:16.9461197Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:25:16.9513162Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config remote.origin.url 2025-08-14T21:25:16.9535043Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:25:16.9592511Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config remote.origin.url 2025-08-14T21:25:16.9614815Z Entering 
'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:25:16.9659640Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config remote.origin.url 2025-08-14T21:25:16.9677342Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:25:16.9727976Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config remote.origin.url 2025-08-14T21:25:16.9749692Z Entering 'third_party/flash-attention' 2025-08-14T21:25:16.9797760Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url 2025-08-14T21:25:16.9822428Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:25:16.9862625Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url 2025-08-14T21:25:16.9882681Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:25:16.9937242Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url 2025-08-14T21:25:16.9964413Z Entering 'third_party/flatbuffers' 2025-08-14T21:25:17.0014757Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2025-08-14T21:25:17.0032510Z Entering 'third_party/fmt' 2025-08-14T21:25:17.0086508Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2025-08-14T21:25:17.0108305Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:25:17.0155434Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2025-08-14T21:25:17.0179129Z Entering 'third_party/gloo' 2025-08-14T21:25:17.0228634Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2025-08-14T21:25:17.0250829Z Entering 'third_party/googletest' 2025-08-14T21:25:17.0300364Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:25:17.0319729Z Entering 'third_party/ideep' 2025-08-14T21:25:17.0367409Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2025-08-14T21:25:17.0390442Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:25:17.0435746Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2025-08-14T21:25:17.0468771Z Entering 'third_party/ittapi' 2025-08-14T21:25:17.0514409Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2025-08-14T21:25:17.0532129Z Entering 'third_party/kineto' 2025-08-14T21:25:17.0591439Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2025-08-14T21:25:17.0611933Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:25:17.0662379Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2025-08-14T21:25:17.0677911Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:25:17.0727534Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2025-08-14T21:25:17.0740726Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:25:17.0790968Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2025-08-14T21:25:17.0810736Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:25:17.0859971Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2025-08-14T21:25:17.0881201Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:25:17.0931247Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2025-08-14T21:25:17.0943704Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:25:17.0996464Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2025-08-14T21:25:17.1018685Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:25:17.1075709Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2025-08-14T21:25:17.1084803Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:25:17.1140446Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:25:17.1162885Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:25:17.1206131Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2025-08-14T21:25:17.1225015Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:25:17.1280583Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2025-08-14T21:25:17.1300049Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:25:17.1351499Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2025-08-14T21:25:17.1370651Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:25:17.1419073Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2025-08-14T21:25:17.1437399Z Entering 'third_party/kleidiai' 2025-08-14T21:25:17.1487448Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config remote.origin.url 2025-08-14T21:25:17.1506425Z Entering 'third_party/mimalloc' 2025-08-14T21:25:17.1560487Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url 2025-08-14T21:25:17.1575813Z Entering 'third_party/nlohmann' 2025-08-14T21:25:17.1626961Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2025-08-14T21:25:17.1646920Z Entering 'third_party/onnx' 2025-08-14T21:25:17.1689899Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2025-08-14T21:25:17.1721088Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:25:17.1775815Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2025-08-14T21:25:17.1796767Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:25:17.1848811Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url 2025-08-14T21:25:17.1866658Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:25:17.1914225Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url 2025-08-14T21:25:17.1929077Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:25:17.1976026Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:25:17.1994121Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:25:17.2041167Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url 2025-08-14T21:25:17.2054063Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:25:17.2103770Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url 2025-08-14T21:25:17.2126733Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:25:17.2179431Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url 2025-08-14T21:25:17.2195501Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:25:17.2242320Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url 2025-08-14T21:25:17.2266430Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:25:17.2313898Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url 2025-08-14T21:25:17.2328838Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:25:17.2380256Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2025-08-14T21:25:17.2400039Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:25:17.2450102Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url 2025-08-14T21:25:17.2476866Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:25:17.2527646Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url 2025-08-14T21:25:17.2570692Z Entering 'third_party/pocketfft' 2025-08-14T21:25:17.2618019Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2025-08-14T21:25:17.2638666Z Entering 'third_party/protobuf' 2025-08-14T21:25:17.2690699Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2025-08-14T21:25:17.2708800Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:25:17.2765544Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2025-08-14T21:25:17.2779056Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:25:17.2828775Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:25:17.2850056Z Entering 'third_party/psimd' 2025-08-14T21:25:17.2907524Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2025-08-14T21:25:17.2921396Z Entering 'third_party/pthreadpool' 2025-08-14T21:25:17.2977630Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2025-08-14T21:25:17.2995687Z Entering 'third_party/pybind11' 2025-08-14T21:25:17.3047012Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2025-08-14T21:25:17.3072923Z Entering 'third_party/python-peachpy' 2025-08-14T21:25:17.3124501Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2025-08-14T21:25:17.3154222Z Entering 'third_party/sleef' 2025-08-14T21:25:17.3199660Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2025-08-14T21:25:17.3214327Z Entering 'third_party/tensorpipe' 2025-08-14T21:25:17.3266432Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2025-08-14T21:25:17.3289682Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:25:17.3330378Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2025-08-14T21:25:17.3343884Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:25:17.3395719Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2025-08-14T21:25:17.3410870Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:25:17.3470957Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2025-08-14T21:25:17.3486797Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:25:17.3538093Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2025-08-14T21:25:17.3559600Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:25:17.3606571Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2025-08-14T21:25:17.4825989Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2025-08-14T21:25:17.5153250Z Entering 'android/libs/fbjni' 2025-08-14T21:25:17.5197825Z Entering 'third_party/FP16' 2025-08-14T21:25:17.5242735Z Entering 'third_party/FXdiv' 2025-08-14T21:25:17.5287181Z Entering 'third_party/NNPACK' 2025-08-14T21:25:17.5329039Z Entering 'third_party/NVTX' 2025-08-14T21:25:17.5374490Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:25:17.5416303Z Entering 'third_party/XNNPACK' 2025-08-14T21:25:17.5470462Z Entering 'third_party/aiter' 2025-08-14T21:25:17.5511790Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:25:17.5559074Z Entering 'third_party/benchmark' 2025-08-14T21:25:17.5602148Z Entering 'third_party/composable_kernel' 2025-08-14T21:25:17.5649859Z Entering 'third_party/cpp-httplib' 2025-08-14T21:25:17.5694807Z Entering 'third_party/cpuinfo' 2025-08-14T21:25:17.5736807Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:25:17.5779208Z Entering 'third_party/cutlass' 2025-08-14T21:25:17.5830743Z Entering 'third_party/fbgemm' 2025-08-14T21:25:17.5871166Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:25:17.5915862Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:25:17.5962209Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:25:17.6003852Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:25:17.6049056Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:25:17.6093899Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:25:17.6137027Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:25:17.6183684Z Entering 'third_party/flash-attention' 2025-08-14T21:25:17.6227729Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:25:17.6279084Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:25:17.6330711Z Entering 'third_party/flatbuffers' 2025-08-14T21:25:17.6377325Z Entering 'third_party/fmt' 2025-08-14T21:25:17.6423065Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:25:17.6468708Z Entering 'third_party/gloo' 2025-08-14T21:25:17.6514565Z Entering 'third_party/googletest' 2025-08-14T21:25:17.6558324Z Entering 'third_party/ideep' 2025-08-14T21:25:17.6600456Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:25:17.6649260Z Entering 'third_party/ittapi' 2025-08-14T21:25:17.6693238Z Entering 'third_party/kineto' 2025-08-14T21:25:17.6734414Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:25:17.6774111Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:25:17.6819254Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:25:17.6859354Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:25:17.6900471Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:25:17.6938220Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:25:17.6982164Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:25:17.7024615Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:25:17.7069501Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:25:17.7113146Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:25:17.7158371Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:25:17.7199134Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:25:17.7246691Z Entering 'third_party/kleidiai' 2025-08-14T21:25:17.7289078Z Entering 'third_party/mimalloc' 2025-08-14T21:25:17.7333928Z Entering 'third_party/nlohmann' 2025-08-14T21:25:17.7376224Z Entering 'third_party/onnx' 2025-08-14T21:25:17.7431590Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:25:17.7476441Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:25:17.7521111Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:25:17.7562162Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:25:17.7605319Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:25:17.7647110Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:25:17.7686838Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:25:17.7727712Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:25:17.7770570Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:25:17.7810040Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:25:17.7852737Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:25:17.7896226Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:25:17.7955619Z Entering 'third_party/pocketfft' 2025-08-14T21:25:17.7999361Z Entering 'third_party/protobuf' 2025-08-14T21:25:17.8043998Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:25:17.8083847Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:25:17.8126657Z Entering 'third_party/psimd' 2025-08-14T21:25:17.8173886Z Entering 'third_party/pthreadpool' 2025-08-14T21:25:17.8215819Z Entering 'third_party/pybind11' 2025-08-14T21:25:17.8257353Z Entering 'third_party/python-peachpy' 2025-08-14T21:25:17.8298635Z Entering 'third_party/sleef' 2025-08-14T21:25:17.8340532Z Entering 'third_party/tensorpipe' 2025-08-14T21:25:17.8384057Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:25:17.8422831Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:25:17.8463423Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:25:17.8507071Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:25:17.8545888Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:25:17.8613843Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2025-08-14T21:25:17.8940072Z Entering 'android/libs/fbjni' 2025-08-14T21:25:17.8987720Z Entering 'third_party/FP16' 2025-08-14T21:25:17.9025456Z Entering 'third_party/FXdiv' 2025-08-14T21:25:17.9068691Z Entering 'third_party/NNPACK' 
2025-08-14T21:25:17.9112275Z Entering 'third_party/NVTX' 2025-08-14T21:25:17.9154299Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T21:25:17.9199863Z Entering 'third_party/XNNPACK' 2025-08-14T21:25:17.9255145Z Entering 'third_party/aiter' 2025-08-14T21:25:17.9297225Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T21:25:17.9348162Z Entering 'third_party/benchmark' 2025-08-14T21:25:17.9392183Z Entering 'third_party/composable_kernel' 2025-08-14T21:25:17.9439519Z Entering 'third_party/cpp-httplib' 2025-08-14T21:25:17.9485069Z Entering 'third_party/cpuinfo' 2025-08-14T21:25:17.9531487Z Entering 'third_party/cudnn_frontend' 2025-08-14T21:25:17.9580275Z Entering 'third_party/cutlass' 2025-08-14T21:25:17.9631217Z Entering 'third_party/fbgemm' 2025-08-14T21:25:17.9677915Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T21:25:17.9720538Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T21:25:17.9768087Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T21:25:17.9812100Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T21:25:17.9861544Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T21:25:17.9902618Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T21:25:17.9943274Z Entering 'third_party/fbgemm/external/json' 2025-08-14T21:25:17.9992932Z Entering 'third_party/flash-attention' 2025-08-14T21:25:18.0031061Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T21:25:18.0081573Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T21:25:18.0137044Z Entering 'third_party/flatbuffers' 2025-08-14T21:25:18.0184115Z Entering 'third_party/fmt' 2025-08-14T21:25:18.0224949Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T21:25:18.0268844Z Entering 'third_party/gloo' 2025-08-14T21:25:18.0315855Z Entering 'third_party/googletest' 2025-08-14T21:25:18.0359132Z Entering 'third_party/ideep' 2025-08-14T21:25:18.0397668Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T21:25:18.0442201Z Entering 'third_party/ittapi' 2025-08-14T21:25:18.0492156Z Entering 'third_party/kineto' 2025-08-14T21:25:18.0532324Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T21:25:18.0573906Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T21:25:18.0616620Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T21:25:18.0659850Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T21:25:18.0698014Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T21:25:18.0739559Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T21:25:18.0784919Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T21:25:18.0824562Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T21:25:18.0871004Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T21:25:18.0915733Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T21:25:18.0964740Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T21:25:18.1006751Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T21:25:18.1049014Z Entering 'third_party/kleidiai' 2025-08-14T21:25:18.1089671Z Entering 'third_party/mimalloc' 2025-08-14T21:25:18.1135627Z Entering 'third_party/nlohmann' 
2025-08-14T21:25:18.1180877Z Entering 'third_party/onnx' 2025-08-14T21:25:18.1235315Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T21:25:18.1284753Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T21:25:18.1324229Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T21:25:18.1363626Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T21:25:18.1408651Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T21:25:18.1453938Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T21:25:18.1493135Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T21:25:18.1534792Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T21:25:18.1575071Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T21:25:18.1616805Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T21:25:18.1657467Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T21:25:18.1703987Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T21:25:18.1765227Z Entering 'third_party/pocketfft' 2025-08-14T21:25:18.1807908Z Entering 'third_party/protobuf' 2025-08-14T21:25:18.1848578Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T21:25:18.1891096Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T21:25:18.1937490Z Entering 'third_party/psimd' 2025-08-14T21:25:18.1981593Z Entering 'third_party/pthreadpool' 2025-08-14T21:25:18.2022078Z Entering 'third_party/pybind11' 2025-08-14T21:25:18.2060995Z Entering 'third_party/python-peachpy' 2025-08-14T21:25:18.2106131Z Entering 'third_party/sleef' 2025-08-14T21:25:18.2150588Z Entering 'third_party/tensorpipe' 2025-08-14T21:25:18.2198551Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T21:25:18.2235214Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T21:25:18.2276979Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T21:25:18.2318482Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T21:25:18.2356601Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T21:25:18.2420008Z ##[endgroup] 2025-08-14T21:25:18.2458872Z [command]/usr/bin/git log -1 --format=%H 2025-08-14T21:25:18.2485500Z 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:25:18.2667653Z Prepare all required actions 2025-08-14T21:25:18.2668260Z Getting action download info 2025-08-14T21:25:18.4352297Z ##[group]Run ./.github/actions/setup-linux 2025-08-14T21:25:18.4352545Z env: 2025-08-14T21:25:18.4352728Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:18.4352904Z ##[endgroup] 2025-08-14T21:25:18.4386557Z ##[group]Run set -euo pipefail 2025-08-14T21:25:18.4386821Z set -euo pipefail 2025-08-14T21:25:18.4387101Z function get_ec2_metadata() { 2025-08-14T21:25:18.4387345Z  # Pulled from instance metadata endpoint for EC2 2025-08-14T21:25:18.4387755Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2025-08-14T21:25:18.4388197Z  category=$1 2025-08-14T21:25:18.4388493Z  # If it is GCP runner (runner name contains gcp), do not run this 2025-08-14T21:25:18.4388781Z  runner_name_str=i-06c8ea4ed8741f176 2025-08-14T21:25:18.4389039Z  if [[ -f /.inarc ]]; then 2025-08-14T21:25:18.4389261Z  echo "ARC Runner, no info on ec2 metadata" 2025-08-14T21:25:18.4389511Z  elif [[ $runner_name_str == *"gcp"* ]]; then 
2025-08-14T21:25:18.4389799Z  echo "Runner is from Google Cloud Platform, No info on ec2 metadata" 2025-08-14T21:25:18.4390061Z  else 2025-08-14T21:25:18.4390636Z  curl -H "X-aws-ec2-metadata-token: $(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 30")" -fsSL "http://169.254.169.254/latest/meta-data/${category}" 2025-08-14T21:25:18.4391146Z  fi 2025-08-14T21:25:18.4391458Z } 2025-08-14T21:25:18.4391660Z echo "ami-id: $(get_ec2_metadata ami-id)" 2025-08-14T21:25:18.4391932Z echo "instance-id: $(get_ec2_metadata instance-id)" 2025-08-14T21:25:18.4392238Z echo "instance-type: $(get_ec2_metadata instance-type)" 2025-08-14T21:25:18.4392503Z echo "system info $(uname -a)" 2025-08-14T21:25:18.4399849Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:18.4400092Z env: 2025-08-14T21:25:18.4400253Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:18.4400437Z ##[endgroup] 2025-08-14T21:25:18.4549807Z ami-id: ami-05ffe3c48a9991133 2025-08-14T21:25:18.4654388Z instance-id: i-06c8ea4ed8741f176 2025-08-14T21:25:18.4763082Z instance-type: m7i-flex.8xlarge 2025-08-14T21:25:18.4772460Z system info Linux ip-10-0-19-47.ec2.internal 6.1.141-155.222.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 17 10:29:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux 2025-08-14T21:25:18.4820582Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:25:18.4821238Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:25:18.4827122Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:18.4827402Z env: 2025-08-14T21:25:18.4827577Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:18.4827760Z ##[endgroup] 2025-08-14T21:25:18.4878581Z ##[group]Run if systemctl is-active --quiet docker; then 2025-08-14T21:25:18.4878922Z if systemctl is-active --quiet docker; then 2025-08-14T21:25:18.4879193Z  echo "Docker daemon is running..."; 2025-08-14T21:25:18.4879412Z else 2025-08-14T21:25:18.4879663Z  echo "Starting docker daemon..." && sudo systemctl start docker; 2025-08-14T21:25:18.4879940Z fi 2025-08-14T21:25:18.4884707Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:18.4884983Z env: 2025-08-14T21:25:18.4885150Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:18.4885342Z ##[endgroup] 2025-08-14T21:25:18.4976699Z Docker daemon is running... 2025-08-14T21:25:18.5008519Z ##[group]Run nick-fields/retry@v3.0.0 2025-08-14T21:25:18.5008742Z with: 2025-08-14T21:25:18.5008904Z shell: bash 2025-08-14T21:25:18.5009209Z timeout_minutes: 5 2025-08-14T21:25:18.5009382Z max_attempts: 3 2025-08-14T21:25:18.5009560Z retry_wait_seconds: 30 2025-08-14T21:25:18.5010962Z command: AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" # For LF Runners we need to make sure we also login to Meta's ECR docker registry too. 
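Aside: the get_ec2_metadata helper shown above uses IMDSv2, first requesting a short-lived session token and then presenting it when reading a metadata category, which is how the ami-id, instance-id, and instance-type values above were obtained. A minimal standalone sketch of the same two-step fetch, using only the endpoint and headers visible in this log (the queried category is illustrative, and it only works on an EC2 host):

  #!/usr/bin/env bash
  set -euo pipefail
  get_ec2_metadata() {
    local category=$1
    # IMDSv2: obtain a session token, then present it on the metadata request.
    local token
    token=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
              -H "X-aws-ec2-metadata-token-ttl-seconds: 30")
    curl -fsSL -H "X-aws-ec2-metadata-token: ${token}" \
      "http://169.254.169.254/latest/meta-data/${category}"
  }
  echo "instance-type: $(get_ec2_metadata instance-type)"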
META_AWS_ACCOUNT_ID=308535385114 if [ "$AWS_ACCOUNT_ID" != "$META_AWS_ACCOUNT_ID" ] ; then aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$META_AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" fi 2025-08-14T21:25:18.5012303Z polling_interval_seconds: 1 2025-08-14T21:25:18.5012503Z warning_on_retry: true 2025-08-14T21:25:18.5012690Z continue_on_error: false 2025-08-14T21:25:18.5012868Z env: 2025-08-14T21:25:18.5013030Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:18.5013227Z AWS_RETRY_MODE: standard 2025-08-14T21:25:18.5013402Z AWS_MAX_ATTEMPTS: 5 2025-08-14T21:25:18.5013589Z AWS_DEFAULT_REGION: us-east-1 2025-08-14T21:25:18.5013782Z ##[endgroup] 2025-08-14T21:25:19.5388691Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:25:19.5389543Z Configure a credential helper to remove this warning. See 2025-08-14T21:25:19.5389960Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:25:19.5390216Z 2025-08-14T21:25:19.5390293Z Login Succeeded 2025-08-14T21:25:19.5756462Z Command completed after 1 attempt(s). 2025-08-14T21:25:19.5815127Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:25:19.5815485Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:25:19.5815772Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:25:19.5822250Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:19.5822513Z env: 2025-08-14T21:25:19.5822685Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:19.5822867Z ##[endgroup] 2025-08-14T21:25:19.5909950Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T21:25:19.5910332Z # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T21:25:19.5910606Z # shellcheck disable=SC2046 2025-08-14T21:25:19.5910855Z docker stop $(docker ps -q) || true 2025-08-14T21:25:19.5911099Z # Prune all of the docker images 2025-08-14T21:25:19.5911315Z docker system prune -af 2025-08-14T21:25:19.5915818Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:19.5916044Z env: 2025-08-14T21:25:19.5916199Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:19.5916375Z ##[endgroup] 2025-08-14T21:25:19.6397362Z "docker stop" requires at least 1 argument. 2025-08-14T21:25:19.6402041Z See 'docker stop --help'. 2025-08-14T21:25:19.6404299Z 2025-08-14T21:25:19.6404645Z Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] 2025-08-14T21:25:19.6404867Z 2025-08-14T21:25:19.6404956Z Stop one or more running containers 2025-08-14T21:25:19.6630140Z Total reclaimed space: 0B 2025-08-14T21:25:19.6667893Z ##[group]Run set +e 2025-08-14T21:25:19.6668148Z set +e 2025-08-14T21:25:19.6668318Z set -x 2025-08-14T21:25:19.6668474Z  2025-08-14T21:25:19.6668642Z PT_DOMAIN=download.pytorch.org 2025-08-14T21:25:19.6669006Z # TODO: Flaky access to download.pytorch.org https://github.com/pytorch/pytorch/issues/100400, 2025-08-14T21:25:19.6669455Z # cleaning this up once the issue is fixed. There are more than one resolved IP here, the last 2025-08-14T21:25:19.6669790Z # one is returned at random 2025-08-14T21:25:19.6670041Z RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" | tail -n1) 2025-08-14T21:25:19.6670282Z  2025-08-14T21:25:19.6670561Z if [ -z "${RESOLVED_IP}" ]; then 2025-08-14T21:25:19.6670836Z  echo "Couldn't resolve ${PT_DOMAIN}, retrying with Google DNS..." 
2025-08-14T21:25:19.6671155Z  RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" @8.8.8.8 | tail -n1) 2025-08-14T21:25:19.6671405Z  2025-08-14T21:25:19.6671574Z  if [ -z "${RESOLVED_IP}" ]; then 2025-08-14T21:25:19.6671823Z  echo "Couldn't resolve ${PT_DOMAIN}, exiting..." 2025-08-14T21:25:19.6672060Z  exit 1 2025-08-14T21:25:19.6672230Z  fi 2025-08-14T21:25:19.6672373Z fi 2025-08-14T21:25:19.6672520Z  2025-08-14T21:25:19.6672697Z if grep -r "${PT_DOMAIN}" /etc/hosts; then 2025-08-14T21:25:19.6672929Z  # Clean up any old records first 2025-08-14T21:25:19.6673162Z  sudo sed -i "/${PT_DOMAIN}/d" /etc/hosts 2025-08-14T21:25:19.6673367Z fi 2025-08-14T21:25:19.6673510Z  2025-08-14T21:25:19.6673712Z echo "${RESOLVED_IP} ${PT_DOMAIN}" | sudo tee -a /etc/hosts 2025-08-14T21:25:19.6673962Z cat /etc/hosts 2025-08-14T21:25:19.6678473Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:19.6678701Z env: 2025-08-14T21:25:19.6678851Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:19.6679023Z ##[endgroup] 2025-08-14T21:25:19.6700476Z + PT_DOMAIN=download.pytorch.org 2025-08-14T21:25:19.6708562Z ++ tail -n1 2025-08-14T21:25:19.6708986Z ++ dig -4 +short download.pytorch.org 2025-08-14T21:25:19.7057671Z + RESOLVED_IP=18.160.10.28 2025-08-14T21:25:19.7057972Z + '[' -z 18.160.10.28 ']' 2025-08-14T21:25:19.7058207Z + grep -r download.pytorch.org /etc/hosts 2025-08-14T21:25:19.7082984Z + echo '18.160.10.28 download.pytorch.org' 2025-08-14T21:25:19.7087556Z + sudo tee -a /etc/hosts 2025-08-14T21:25:19.9640331Z 18.160.10.28 download.pytorch.org 2025-08-14T21:25:19.9674705Z + cat /etc/hosts 2025-08-14T21:25:19.9685679Z 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 2025-08-14T21:25:19.9693374Z ::1 localhost6 localhost6.localdomain6 2025-08-14T21:25:19.9693661Z 18.160.10.28 download.pytorch.org 2025-08-14T21:25:19.9796026Z ##[group]Run pytorch/test-infra/.github/actions/calculate-docker-image@main 2025-08-14T21:25:19.9796350Z with: 2025-08-14T21:25:19.9796901Z docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9797518Z use-custom-docker-registry: true 2025-08-14T21:25:19.9797732Z docker-build-dir: .ci/docker 2025-08-14T21:25:19.9797933Z docker-build-script: ./build.sh 2025-08-14T21:25:19.9798141Z working-directory: . 2025-08-14T21:25:19.9798374Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:19.9798631Z force-push: false 2025-08-14T21:25:19.9798785Z env: 2025-08-14T21:25:19.9798941Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:19.9799123Z ##[endgroup] 2025-08-14T21:25:19.9811818Z ##[group]Run set -ex 2025-08-14T21:25:19.9812038Z set -ex 2025-08-14T21:25:19.9812187Z  2025-08-14T21:25:19.9812466Z # If the docker build directory or the build script doesn't exist, the action will 2025-08-14T21:25:19.9812856Z # gracefully return the docker image name as it is. 
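Aside: the step traced above works around flaky DNS for download.pytorch.org (see the issue referenced in the script) by pinning a freshly resolved IP in /etc/hosts; the trace shows 18.160.10.28 being appended. A minimal sketch of that pinning technique, built only from the commands visible in the log (the domain and the 8.8.8.8 fallback resolver come from the log; sudo access is assumed):

  #!/usr/bin/env bash
  set -euo pipefail
  PT_DOMAIN=download.pytorch.org
  # Resolve via the default resolver first, then fall back to Google DNS.
  RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" | tail -n1)
  if [ -z "${RESOLVED_IP}" ]; then
    RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" @8.8.8.8 | tail -n1)
  fi
  if [ -z "${RESOLVED_IP}" ]; then
    echo "Couldn't resolve ${PT_DOMAIN}, exiting..."
    exit 1
  fi
  # Drop any stale pin for the domain, then append the fresh one.
  sudo sed -i "/${PT_DOMAIN}/d" /etc/hosts
  echo "${RESOLVED_IP} ${PT_DOMAIN}" | sudo tee -a /etc/hosts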
Pulling docker image in Linux 2025-08-14T21:25:19.9813189Z # job could then download the pre-built image as usual 2025-08-14T21:25:19.9813589Z if [[ -d "${DOCKER_BUILD_DIR}" ]] && [[ -f "${DOCKER_BUILD_DIR}/${DOCKER_BUILD_SCRIPT}" ]] && [[ "${USE_CUSTOM_DOCKER_REGISTRY}" == "true" ]]; then 2025-08-14T21:25:19.9813959Z  echo "skip=false" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9814167Z else 2025-08-14T21:25:19.9814356Z  echo "skip=true" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9814643Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9814916Z  2025-08-14T21:25:19.9815250Z  echo "Not using custom ECR registry. Either it was not requested or there is no Docker build script in the ${REPO_NAME} repo..." 2025-08-14T21:25:19.9815643Z  exit 0 2025-08-14T21:25:19.9815799Z fi 2025-08-14T21:25:19.9815943Z  2025-08-14T21:25:19.9816167Z if [[ "${DOCKER_IMAGE_NAME}" == *"${DOCKER_REGISTRY}/${REPO_NAME}"* ]]; then 2025-08-14T21:25:19.9816534Z  # The docker image name already includes the ECR prefix and tag, so we can just 2025-08-14T21:25:19.9816862Z  # use it as it is, but first let's extract the tag 2025-08-14T21:25:19.9817156Z  DOCKER_TAG=$(echo "${DOCKER_IMAGE_NAME}" | awk -F '[:,]' '{print $2}') 2025-08-14T21:25:19.9817468Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9817768Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9818017Z else 2025-08-14T21:25:19.9818194Z  if [[ "${DOCKER_IMAGE_NAME}" == *:* ]]; then 2025-08-14T21:25:19.9818438Z  CUSTOM_TAG_PREFIX=${DOCKER_IMAGE_NAME#*:} 2025-08-14T21:25:19.9818697Z  DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME%%:*} 2025-08-14T21:25:19.9818908Z  fi 2025-08-14T21:25:19.9819196Z  DOCKER_TAG=${CUSTOM_TAG_PREFIX:+${CUSTOM_TAG_PREFIX}-}$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}") 2025-08-14T21:25:19.9819579Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9820113Z  echo "docker-image=${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9820533Z  echo "custom-tag-prefix=${CUSTOM_TAG_PREFIX}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9820957Z fi 2025-08-14T21:25:19.9828944Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:19.9829191Z env: 2025-08-14T21:25:19.9829358Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:19.9829554Z REPO_NAME: pytorch 2025-08-14T21:25:19.9830206Z DOCKER_IMAGE_NAME: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9830755Z DOCKER_BUILD_DIR: .ci/docker 2025-08-14T21:25:19.9830957Z DOCKER_BUILD_SCRIPT: ./build.sh 2025-08-14T21:25:19.9831210Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:19.9831469Z USE_CUSTOM_DOCKER_REGISTRY: true 2025-08-14T21:25:19.9831673Z CUSTOM_TAG_PREFIX: 2025-08-14T21:25:19.9831846Z ##[endgroup] 2025-08-14T21:25:19.9856261Z + [[ -d .ci/docker ]] 2025-08-14T21:25:19.9856699Z + [[ -f .ci/docker/./build.sh ]] 2025-08-14T21:25:19.9857048Z + [[ true == \t\r\u\e ]] 2025-08-14T21:25:19.9857314Z + echo skip=false 2025-08-14T21:25:19.9858617Z + [[ 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe == *\3\0\8\5\3\5\3\8\5\1\1\4\.\d\k\r\.\e\c\r\.\u\s\-\e\a\s\t\-\1\.\a\m\a\z\o\n\a\w\s\.\c\o\m\/\p\y\t\o\r\c\h* ]] 2025-08-14T21:25:19.9867510Z ++ echo 
308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9872326Z ++ awk -F '[:,]' '{print $2}' 2025-08-14T21:25:19.9894924Z + DOCKER_TAG=pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9895611Z + echo docker-tag=pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9896376Z + echo docker-image=308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9914275Z ##[group]Run set +e 2025-08-14T21:25:19.9914487Z set +e 2025-08-14T21:25:19.9914648Z set -x 2025-08-14T21:25:19.9914801Z  2025-08-14T21:25:19.9914938Z login() { 2025-08-14T21:25:19.9915238Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-08-14T21:25:19.9915555Z } 2025-08-14T21:25:19.9915691Z  2025-08-14T21:25:19.9915832Z retry () { 2025-08-14T21:25:19.9916013Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-08-14T21:25:19.9916206Z } 2025-08-14T21:25:19.9916344Z  2025-08-14T21:25:19.9916498Z retry login "${DOCKER_REGISTRY}" 2025-08-14T21:25:19.9916686Z  2025-08-14T21:25:19.9916826Z START_TIME=$(date +%s) 2025-08-14T21:25:19.9917017Z # Wait up to 120 minutes 2025-08-14T21:25:19.9917257Z while [[ $(( $(date +%s) - 7200 )) -lt $START_TIME ]]; do 2025-08-14T21:25:19.9917540Z  # Check if image already exists, if it does then skip building it 2025-08-14T21:25:19.9917832Z  if docker manifest inspect "${DOCKER_IMAGE}"; then 2025-08-14T21:25:19.9918056Z  exit 0 2025-08-14T21:25:19.9918207Z  fi 2025-08-14T21:25:19.9918356Z  2025-08-14T21:25:19.9918600Z  # NB: This flag is used by Docker build workflow to push the image to ECR, so we can 2025-08-14T21:25:19.9918986Z  # use this to differentiate between the Docker build and regular build jobs. For the 2025-08-14T21:25:19.9919367Z  # latter, it will wait for the Docker images to become available before continuing 2025-08-14T21:25:19.9919674Z  if [ "${DOCKER_PUSH:-false}" == "true" ]; then 2025-08-14T21:25:19.9919923Z  # It's a Docker build job, let's build the image 2025-08-14T21:25:19.9920132Z  break 2025-08-14T21:25:19.9920370Z  else 2025-08-14T21:25:19.9920583Z  # It's a regular build job, wait for the image to become available 2025-08-14T21:25:19.9920826Z  sleep 300 2025-08-14T21:25:19.9920978Z  fi 2025-08-14T21:25:19.9921123Z done 2025-08-14T21:25:19.9921268Z  2025-08-14T21:25:19.9921480Z # NB: This part requires a full checkout. Otherwise, the merge base will 2025-08-14T21:25:19.9921900Z # be empty. 
The default action would be to continue rebuild the image 2025-08-14T21:25:19.9922203Z if [[ "$BASE_REVISION" = "$(git rev-parse HEAD)" ]]; then 2025-08-14T21:25:19.9922476Z  # if we're on the base branch then use the parent commit 2025-08-14T21:25:19.9922718Z  MERGE_BASE=$(git rev-parse HEAD~) 2025-08-14T21:25:19.9922913Z else 2025-08-14T21:25:19.9923121Z  # otherwise we're on a PR, so use the most recent base commit 2025-08-14T21:25:19.9923395Z  MERGE_BASE=$(git merge-base HEAD "$BASE_REVISION") 2025-08-14T21:25:19.9923617Z fi 2025-08-14T21:25:19.9923759Z  2025-08-14T21:25:19.9923917Z if [[ -z "${MERGE_BASE}" ]]; then 2025-08-14T21:25:19.9924135Z  echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9924335Z  2025-08-14T21:25:19.9924611Z  echo "Finding merge base only works with full checkout, please set fetch-depth to 0, continuing ..." 2025-08-14T21:25:19.9924913Z  exit 0 2025-08-14T21:25:19.9925062Z fi 2025-08-14T21:25:19.9925203Z  2025-08-14T21:25:19.9925388Z if ! git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}"; then 2025-08-14T21:25:19.9925773Z  echo "Directory '${DOCKER_BUILD_DIR}' not found in commit $MERGE_BASE, you should rebase onto a more recent commit" 2025-08-14T21:25:19.9926099Z  exit 1 2025-08-14T21:25:19.9926246Z fi 2025-08-14T21:25:19.9926378Z  2025-08-14T21:25:19.9926612Z PREVIOUS_DOCKER_TAG=$(git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}") 2025-08-14T21:25:19.9926989Z # If no image exists but the hash is the same as the previous hash then we should error out here 2025-08-14T21:25:19.9927322Z if [[ "${PREVIOUS_DOCKER_TAG}" == "${DOCKER_TAG}" ]]; then 2025-08-14T21:25:19.9927712Z  echo "WARNING: Something has gone wrong and the previous image isn't available for the merge-base of your branch" 2025-08-14T21:25:19.9928150Z  echo " Will re-build docker image to store in local cache, TTS may be longer" 2025-08-14T21:25:19.9928417Z fi 2025-08-14T21:25:19.9928551Z  2025-08-14T21:25:19.9928726Z echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-08-14T21:25:19.9933380Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:19.9933619Z env: 2025-08-14T21:25:19.9933769Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:19.9933959Z DOCKER_BUILD_DIR: .ci/docker 2025-08-14T21:25:19.9934191Z BASE_REVISION: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:25:19.9934754Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9935469Z DOCKER_TAG: pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:19.9935911Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:19.9936157Z DOCKER_PUSH: 2025-08-14T21:25:19.9936306Z ##[endgroup] 2025-08-14T21:25:19.9960472Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:19.9967295Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:19.9969185Z + aws ecr get-login-password --region us-east-1 2025-08-14T21:25:19.9969703Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:20.4647992Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:25:20.4649139Z Configure a credential helper to remove this warning. 
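Aside: the script above decides whether the CI Docker image has to be rebuilt by comparing the tree hash of .ci/docker at the merge base against the current tag; an empty merge base means the checkout was shallow, and a matching hash with no published image produces the warning and a local rebuild. A condensed sketch of that check, assuming a full checkout (fetch-depth 0) and ignoring the optional custom tag prefix handled by the real action (BASE_REVISION is the value from the env block above):

  set -euo pipefail
  DOCKER_BUILD_DIR=.ci/docker
  BASE_REVISION=1fc683cf17c8c673044538d10266c00f92987be2
  # Current tag component: the tree hash of the Docker build directory at HEAD.
  DOCKER_TAG=$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}")
  # On the base branch compare against the parent commit, otherwise against the merge base.
  if [[ "${BASE_REVISION}" == "$(git rev-parse HEAD)" ]]; then
    MERGE_BASE=$(git rev-parse HEAD~)
  else
    MERGE_BASE=$(git merge-base HEAD "${BASE_REVISION}")
  fi
  PREVIOUS_DOCKER_TAG=$(git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}")
  if [[ "${PREVIOUS_DOCKER_TAG}" == "${DOCKER_TAG}" ]]; then
    echo "WARNING: hash unchanged but no published image found; rebuilding into the local cache"
  fi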
See 2025-08-14T21:25:20.4649678Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:25:20.4649939Z 2025-08-14T21:25:20.4650028Z Login Succeeded 2025-08-14T21:25:20.4670436Z ++ date +%s 2025-08-14T21:25:20.4679985Z + START_TIME=1755206720 2025-08-14T21:25:20.4683585Z ++ date +%s 2025-08-14T21:25:20.4691464Z + [[ 1755199520 -lt 1755206720 ]] 2025-08-14T21:25:20.4692092Z + docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:20.6898699Z { 2025-08-14T21:25:20.6899003Z "schemaVersion": 2, 2025-08-14T21:25:20.6900042Z "mediaType": "application/vnd.docker.distribution.manifest.v2+json", 2025-08-14T21:25:20.6900435Z "config": { 2025-08-14T21:25:20.6900704Z "mediaType": "application/vnd.docker.container.image.v1+json", 2025-08-14T21:25:20.6901087Z "size": 30151, 2025-08-14T21:25:20.6901401Z "digest": "sha256:0899ae453036ee7a91795ea95b1db61000579eeb74b140edab5976919ee64bbe" 2025-08-14T21:25:20.6901719Z }, 2025-08-14T21:25:20.6901879Z "layers": [ 2025-08-14T21:25:20.6902044Z { 2025-08-14T21:25:20.6902283Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6902589Z "size": 30448173, 2025-08-14T21:25:20.6902924Z "digest": "sha256:660ffc76f83b006444a5731b215acc2e35138d8be5cac8ed1ffd40f947117495" 2025-08-14T21:25:20.6903244Z }, 2025-08-14T21:25:20.6903390Z { 2025-08-14T21:25:20.6903634Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6903929Z "size": 1554, 2025-08-14T21:25:20.6904211Z "digest": "sha256:c7b4a852a45516e27a9256df90878663d770f96d271d6155d43be78cc5225eef" 2025-08-14T21:25:20.6904540Z }, 2025-08-14T21:25:20.6904688Z { 2025-08-14T21:25:20.6904913Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6905202Z "size": 313280151, 2025-08-14T21:25:20.6905494Z "digest": "sha256:e5a28988c8932eb5797557621582a064ce48651dbb5eaed379e9978535daccb9" 2025-08-14T21:25:20.6905804Z }, 2025-08-14T21:25:20.6905957Z { 2025-08-14T21:25:20.6906193Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6906477Z "size": 793, 2025-08-14T21:25:20.6906774Z "digest": "sha256:76a69b57b6837bef07dbc1b481cf28a62dfd7c7063219d9f6e0d0d63067653c7" 2025-08-14T21:25:20.6907093Z }, 2025-08-14T21:25:20.6907243Z { 2025-08-14T21:25:20.6907472Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6907760Z "size": 106, 2025-08-14T21:25:20.6908057Z "digest": "sha256:5c785dcb4cdbf1f2ceffe4d1d8e85d73225a56d0236e7ed6e36a95c836996052" 2025-08-14T21:25:20.6908375Z }, 2025-08-14T21:25:20.6908522Z { 2025-08-14T21:25:20.6908754Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6909056Z "size": 704, 2025-08-14T21:25:20.6909341Z "digest": "sha256:836ab08052e8eb2bae68e69ae086fd23a5f04a8491c320718ab47f84f03aebb1" 2025-08-14T21:25:20.6909673Z }, 2025-08-14T21:25:20.6909818Z { 2025-08-14T21:25:20.6910053Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6910348Z "size": 1217, 2025-08-14T21:25:20.6910653Z "digest": "sha256:53b11c77468cbefca210560f7d8be8e58f9eeb415e096ab0c3fb0277f0b41caf" 2025-08-14T21:25:20.6910976Z }, 2025-08-14T21:25:20.6911123Z { 2025-08-14T21:25:20.6911352Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6911629Z "size": 485, 2025-08-14T21:25:20.6911918Z "digest": 
"sha256:e97311a6a967664cbe10c5027a1ec60c514caa9a1160167d8363088fd1f9fe09" 2025-08-14T21:25:20.6912230Z }, 2025-08-14T21:25:20.6912369Z { 2025-08-14T21:25:20.6912698Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6912992Z "size": 110343699, 2025-08-14T21:25:20.6913700Z "digest": "sha256:2c414689d31dc46a22fe02d4f43699f528cc1c02fb505824768383fa0bbf1c74" 2025-08-14T21:25:20.6914050Z }, 2025-08-14T21:25:20.6914206Z { 2025-08-14T21:25:20.6914435Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6914728Z "size": 4817, 2025-08-14T21:25:20.6915033Z "digest": "sha256:6d89b5f065d59e4abcaa9b5ff3bf0afded2394d493d2df0f7babf7154f7548e0" 2025-08-14T21:25:20.6915472Z }, 2025-08-14T21:25:20.6915618Z { 2025-08-14T21:25:20.6915865Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6916245Z "size": 1709, 2025-08-14T21:25:20.6916568Z "digest": "sha256:5a5cc76ada432cccf7d18e0eb79379afb95deaaa7afec482406267924d291ae4" 2025-08-14T21:25:20.6916895Z }, 2025-08-14T21:25:20.6917041Z { 2025-08-14T21:25:20.6917273Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6917616Z "size": 724, 2025-08-14T21:25:20.6917913Z "digest": "sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:25:20.6918234Z }, 2025-08-14T21:25:20.6918368Z { 2025-08-14T21:25:20.6918599Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6918893Z "size": 542, 2025-08-14T21:25:20.6919165Z "digest": "sha256:2e16579078600b91216fd14aca1e0ce0f9d1801b230689dd309980e8d2783935" 2025-08-14T21:25:20.6919474Z }, 2025-08-14T21:25:20.6919621Z { 2025-08-14T21:25:20.6919849Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6920141Z "size": 3397512507, 2025-08-14T21:25:20.6920445Z "digest": "sha256:7b92d7a4b8c766d7b7873aa33088e171fb44a8e968645e4b31dfe6de2968aead" 2025-08-14T21:25:20.6920759Z }, 2025-08-14T21:25:20.6920895Z { 2025-08-14T21:25:20.6921124Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6921407Z "size": 32, 2025-08-14T21:25:20.6921689Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6922015Z }, 2025-08-14T21:25:20.6922160Z { 2025-08-14T21:25:20.6922383Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6922669Z "size": 380, 2025-08-14T21:25:20.6922956Z "digest": "sha256:d6226eb61f823984003d5ac28f4d66fec9b27baf5d54a9513286483f5912cd88" 2025-08-14T21:25:20.6923262Z }, 2025-08-14T21:25:20.6923408Z { 2025-08-14T21:25:20.6923639Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6923919Z "size": 234681, 2025-08-14T21:25:20.6924214Z "digest": "sha256:83c70f4266a6ee5f8f44a88d4cb951382f6c960323b8250046bddc080e62268b" 2025-08-14T21:25:20.6924530Z }, 2025-08-14T21:25:20.6924676Z { 2025-08-14T21:25:20.6924897Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6925192Z "size": 231, 2025-08-14T21:25:20.6925475Z "digest": "sha256:60c725d21861c24c417efe3a5474414ba04f0f49c78c6d6451478ab9e45469ec" 2025-08-14T21:25:20.6925786Z }, 2025-08-14T21:25:20.6925932Z { 2025-08-14T21:25:20.6926165Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6926445Z "size": 4464546, 2025-08-14T21:25:20.6926740Z "digest": "sha256:a504e76e66a49926b4ea837b7a7ff3c842a27b2caaa4d80cf5057a1e55293666" 
2025-08-14T21:25:20.6927061Z }, 2025-08-14T21:25:20.6927197Z { 2025-08-14T21:25:20.6927431Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6927722Z "size": 1864, 2025-08-14T21:25:20.6928015Z "digest": "sha256:fc1c200a4f77face2af0146f9b03ad04f31fe06fec216473ffd2ebd538cde056" 2025-08-14T21:25:20.6928344Z }, 2025-08-14T21:25:20.6928489Z { 2025-08-14T21:25:20.6928726Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6929005Z "size": 475, 2025-08-14T21:25:20.6929289Z "digest": "sha256:43273c22704f81f162741d2039015f745273eee1d1fdec47be35c9b2a90dcc5b" 2025-08-14T21:25:20.6929595Z }, 2025-08-14T21:25:20.6929845Z { 2025-08-14T21:25:20.6930081Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6930355Z "size": 178, 2025-08-14T21:25:20.6930634Z "digest": "sha256:89df389d042adbd7621a94d36b6e3db60ff6c559efb95c6fcc11b8afd42f0599" 2025-08-14T21:25:20.6930954Z }, 2025-08-14T21:25:20.6931100Z { 2025-08-14T21:25:20.6931323Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6931652Z "size": 586, 2025-08-14T21:25:20.6931935Z "digest": "sha256:684349f50d9456597026ee5c1bd890c51d1e498614f367adf03329c5227add79" 2025-08-14T21:25:20.6932236Z }, 2025-08-14T21:25:20.6932370Z { 2025-08-14T21:25:20.6932600Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6932872Z "size": 218, 2025-08-14T21:25:20.6933152Z "digest": "sha256:21d0eae87fb3ac753b3f0e91ae638360d23922d4cd119410a5a1b97bbe0ca435" 2025-08-14T21:25:20.6933460Z }, 2025-08-14T21:25:20.6933738Z { 2025-08-14T21:25:20.6934020Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6934304Z "size": 802, 2025-08-14T21:25:20.6934594Z "digest": "sha256:c9c2b424b8e08d943dc259a3796d66eede3a1e93a6460df5db132c0036d3d6af" 2025-08-14T21:25:20.6934909Z }, 2025-08-14T21:25:20.6935055Z { 2025-08-14T21:25:20.6935287Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6935556Z "size": 32, 2025-08-14T21:25:20.6935854Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6936172Z }, 2025-08-14T21:25:20.6936318Z { 2025-08-14T21:25:20.6936542Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6936825Z "size": 104, 2025-08-14T21:25:20.6937109Z "digest": "sha256:98dda28f339592e3ca6d589d551e69b8314f2b7fc2a1544eacc1b3c2d3378521" 2025-08-14T21:25:20.6937417Z }, 2025-08-14T21:25:20.6937564Z { 2025-08-14T21:25:20.6937796Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6938072Z "size": 1496, 2025-08-14T21:25:20.6938365Z "digest": "sha256:acf5babd87f23aa905883eb434073e9a00ff41679134f2f4827dd86949f5a9d9" 2025-08-14T21:25:20.6938681Z }, 2025-08-14T21:25:20.6938818Z { 2025-08-14T21:25:20.6939047Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6939328Z "size": 453555614, 2025-08-14T21:25:20.6939618Z "digest": "sha256:7c5050d8408d3c4f9f5e8f2cb215245473bfc2f1510fe5ee01c2a6c505068b5a" 2025-08-14T21:25:20.6940007Z }, 2025-08-14T21:25:20.6940160Z { 2025-08-14T21:25:20.6940393Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6940674Z "size": 163, 2025-08-14T21:25:20.6940964Z "digest": "sha256:7ddd14e2b548b9ae6e216a081bb20116434aacbbe571c99b40e60fb2fde22a2a" 2025-08-14T21:25:20.6941284Z }, 2025-08-14T21:25:20.6941421Z { 2025-08-14T21:25:20.6941649Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6942209Z "size": 347, 2025-08-14T21:25:20.6942489Z "digest": "sha256:4ba8e7a736c8199931fd7ff9931a5f17b7b931d0383a3e158f1b12b191a1d250" 2025-08-14T21:25:20.6942806Z }, 2025-08-14T21:25:20.6942957Z { 2025-08-14T21:25:20.6943180Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6943475Z "size": 32, 2025-08-14T21:25:20.6943762Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6944089Z }, 2025-08-14T21:25:20.6944231Z { 2025-08-14T21:25:20.6944460Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6944744Z "size": 106, 2025-08-14T21:25:20.6945026Z "digest": "sha256:907c320fee2f90da0cf5028c90a0ef49a137518baf79b483dcf7f22d5a0a497d" 2025-08-14T21:25:20.6945344Z }, 2025-08-14T21:25:20.6945489Z { 2025-08-14T21:25:20.6945709Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6945987Z "size": 425, 2025-08-14T21:25:20.6946413Z "digest": "sha256:18c4ed1ec491095788e352ae018afd84de0f251fbcfb8f74d5d893e1e9ab196d" 2025-08-14T21:25:20.6946724Z }, 2025-08-14T21:25:20.6946871Z { 2025-08-14T21:25:20.6947104Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6947385Z "size": 19308711, 2025-08-14T21:25:20.6947689Z "digest": "sha256:d7618c2df6cdb4bbf3d9870ba2d089094ac46c429b573d9adb94411fac54cfca" 2025-08-14T21:25:20.6948009Z }, 2025-08-14T21:25:20.6948228Z { 2025-08-14T21:25:20.6948451Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6948732Z "size": 108, 2025-08-14T21:25:20.6949018Z "digest": "sha256:b7bdd9a6f789ba483a46c92e5d373638850f33e88b1baa4bbe67e1c6a09cb7d0" 2025-08-14T21:25:20.6949335Z }, 2025-08-14T21:25:20.6949483Z { 2025-08-14T21:25:20.6949708Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6949978Z "size": 691, 2025-08-14T21:25:20.6950266Z "digest": "sha256:6738ba83282e002d92bff3d2b4951e3c1a67f5ec2c1bad2fd780c2f5d444748f" 2025-08-14T21:25:20.6950606Z }, 2025-08-14T21:25:20.6950742Z { 2025-08-14T21:25:20.6950972Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6951249Z "size": 724, 2025-08-14T21:25:20.6951523Z "digest": "sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:25:20.6951830Z }, 2025-08-14T21:25:20.6951975Z { 2025-08-14T21:25:20.6952204Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6952479Z "size": 116, 2025-08-14T21:25:20.6952765Z "digest": "sha256:dfb0f24886393e1d394f1f433dc9346026679dafd7a60c3a93de17d94078c1ca" 2025-08-14T21:25:20.6953073Z }, 2025-08-14T21:25:20.6953210Z { 2025-08-14T21:25:20.6953439Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6953715Z "size": 136, 2025-08-14T21:25:20.6953995Z "digest": "sha256:dc833b0762f2e144670a660f6b7ce62cec71a5fdd24df4e67b5c6173d5834451" 2025-08-14T21:25:20.6954310Z }, 2025-08-14T21:25:20.6954456Z { 2025-08-14T21:25:20.6954673Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6954952Z "size": 139, 2025-08-14T21:25:20.6955235Z "digest": "sha256:8827df8ca2da347e0032d1bff3b0312437f711c5d0b5f2164f8a60c3368a9827" 2025-08-14T21:25:20.6955548Z }, 2025-08-14T21:25:20.6955685Z { 2025-08-14T21:25:20.6955920Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6956213Z "size": 17672683360, 
2025-08-14T21:25:20.6956515Z "digest": "sha256:fac8f3bd0f85eaffb43df539683dc3d861c370e583623253559fd7a1f5b00229" 2025-08-14T21:25:20.6956832Z }, 2025-08-14T21:25:20.6956974Z { 2025-08-14T21:25:20.6957188Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6957455Z "size": 214, 2025-08-14T21:25:20.6957731Z "digest": "sha256:d7cf7f140df32761610e1d58686db7f7c66a85affa4bb4b9d3c245e232443a8f" 2025-08-14T21:25:20.6958027Z }, 2025-08-14T21:25:20.6958171Z { 2025-08-14T21:25:20.6958393Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6958669Z "size": 272992162, 2025-08-14T21:25:20.6958966Z "digest": "sha256:733eedc8da8d8e7bd5a85a58d3d7818f14ed9a4fdf2dbd587038bb7725fbb9f7" 2025-08-14T21:25:20.6959276Z }, 2025-08-14T21:25:20.6959414Z { 2025-08-14T21:25:20.6959628Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6959910Z "size": 6435582332, 2025-08-14T21:25:20.6960195Z "digest": "sha256:5b092eb06909a2ea8906849acac588a10864da349670d65c0bfea342187edba2" 2025-08-14T21:25:20.6960489Z }, 2025-08-14T21:25:20.6960635Z { 2025-08-14T21:25:20.6960856Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6961135Z "size": 129, 2025-08-14T21:25:20.6961407Z "digest": "sha256:bc596103109216e154006085503386753b0b114b5900bf44758cdff324df5504" 2025-08-14T21:25:20.6961718Z }, 2025-08-14T21:25:20.6961855Z { 2025-08-14T21:25:20.6962163Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6962449Z "size": 776, 2025-08-14T21:25:20.6962741Z "digest": "sha256:0531cc34c12ab9127f1858c4cf365bb3a02bc31e8d6df5eabba2e1b6ef026ccf" 2025-08-14T21:25:20.6963052Z }, 2025-08-14T21:25:20.6963196Z { 2025-08-14T21:25:20.6963428Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6963766Z "size": 724, 2025-08-14T21:25:20.6964050Z "digest": "sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:25:20.6964358Z }, 2025-08-14T21:25:20.6964501Z { 2025-08-14T21:25:20.6964733Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6965021Z "size": 141, 2025-08-14T21:25:20.6965301Z "digest": "sha256:38c303d3b62eb463762816db04062a480014a6f3c9754386f3e83ba331ab4d1d" 2025-08-14T21:25:20.6965601Z }, 2025-08-14T21:25:20.6965747Z { 2025-08-14T21:25:20.6965963Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6966246Z "size": 32, 2025-08-14T21:25:20.6966538Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6966852Z }, 2025-08-14T21:25:20.6966987Z { 2025-08-14T21:25:20.6967223Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6967508Z "size": 160, 2025-08-14T21:25:20.6967790Z "digest": "sha256:e06d15594a2a76995baebbce7032946ff9f94e281246fbc3f8ab19d8bcc38b81" 2025-08-14T21:25:20.6968101Z }, 2025-08-14T21:25:20.6968243Z { 2025-08-14T21:25:20.6968461Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6968740Z "size": 1010, 2025-08-14T21:25:20.6969040Z "digest": "sha256:0e55deb5cb38fd36b600183f7d86eaca0dabc04d2ff4d49ec2266ee3329edc4a" 2025-08-14T21:25:20.6969344Z }, 2025-08-14T21:25:20.6969484Z { 2025-08-14T21:25:20.6969713Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6969990Z "size": 724, 2025-08-14T21:25:20.6970282Z "digest": 
"sha256:fc6b37d40530f2c5339430321eab67ae1e2e87e997587c7bc8c41504464208f9" 2025-08-14T21:25:20.6970589Z }, 2025-08-14T21:25:20.6970734Z { 2025-08-14T21:25:20.6970957Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6971232Z "size": 134, 2025-08-14T21:25:20.6971524Z "digest": "sha256:4a53d66dce071bb7416414aa1adbc3e4a59003300c0d42038612fabdeb5a1b01" 2025-08-14T21:25:20.6971834Z }, 2025-08-14T21:25:20.6971994Z { 2025-08-14T21:25:20.6972216Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6972480Z "size": 32, 2025-08-14T21:25:20.6972763Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6973073Z }, 2025-08-14T21:25:20.6973212Z { 2025-08-14T21:25:20.6973439Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6973717Z "size": 159, 2025-08-14T21:25:20.6973992Z "digest": "sha256:1519daa051b8b80e04125f2f2215dc412dcdbb9502711925e97aeccbda069eaf" 2025-08-14T21:25:20.6974320Z }, 2025-08-14T21:25:20.6974465Z { 2025-08-14T21:25:20.6974692Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6974959Z "size": 1371, 2025-08-14T21:25:20.6975250Z "digest": "sha256:381ed91d2119f078fbba19102a65befc4cb242f8cf47a11fb6f76ea424690692" 2025-08-14T21:25:20.6975573Z }, 2025-08-14T21:25:20.6975709Z { 2025-08-14T21:25:20.6975937Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6976213Z "size": 32, 2025-08-14T21:25:20.6976491Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6976807Z }, 2025-08-14T21:25:20.6976951Z { 2025-08-14T21:25:20.6977175Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6977456Z "size": 137, 2025-08-14T21:25:20.6977742Z "digest": "sha256:c6b0a01a96dd479640297d4b012031ffc1bd9fc0daf61d86058f9b675c0a0705" 2025-08-14T21:25:20.6978130Z }, 2025-08-14T21:25:20.6978267Z { 2025-08-14T21:25:20.6978496Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6978783Z "size": 380, 2025-08-14T21:25:20.6979067Z "digest": "sha256:62df6413daeefebde04dcc401134734952e4ea37fc85ff23c89cb9b4fbd45155" 2025-08-14T21:25:20.6979382Z }, 2025-08-14T21:25:20.6979525Z { 2025-08-14T21:25:20.6979963Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6980265Z "size": 32, 2025-08-14T21:25:20.6980555Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6980875Z }, 2025-08-14T21:25:20.6981021Z { 2025-08-14T21:25:20.6981254Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6981530Z "size": 104, 2025-08-14T21:25:20.6981818Z "digest": "sha256:7a18bc2a6881b76a6f591c98dafb47e44d903f7a905f7eba0fc3aedb5c90fff7" 2025-08-14T21:25:20.6982138Z }, 2025-08-14T21:25:20.6982283Z { 2025-08-14T21:25:20.6982500Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6982779Z "size": 407, 2025-08-14T21:25:20.6983062Z "digest": "sha256:93359cd58a8cece344fd4291b27647e57761c9399bb54bb0c18149c12af5f66a" 2025-08-14T21:25:20.6983363Z }, 2025-08-14T21:25:20.6983510Z { 2025-08-14T21:25:20.6983742Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6984008Z "size": 32, 2025-08-14T21:25:20.6984302Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6984616Z }, 
2025-08-14T21:25:20.6984752Z { 2025-08-14T21:25:20.6984988Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6985275Z "size": 109, 2025-08-14T21:25:20.6985556Z "digest": "sha256:c35ba0a1f353d6894c914a4bfbea9a2c9b8ac1b526af64d34cbe9a12bd83c78e" 2025-08-14T21:25:20.6985880Z }, 2025-08-14T21:25:20.6986026Z { 2025-08-14T21:25:20.6986258Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6986527Z "size": 1896, 2025-08-14T21:25:20.6986816Z "digest": "sha256:dcf1e01c98d6a6f72674d79a4e8e4047b54796576cd06ad682c225a92820a8f5" 2025-08-14T21:25:20.6987129Z }, 2025-08-14T21:25:20.6987267Z { 2025-08-14T21:25:20.6987494Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6987775Z "size": 242635753, 2025-08-14T21:25:20.6988069Z "digest": "sha256:bad0564f61fdf377e3ae31f6fec0ec28b6922da0b9db28408b55b8e97ff1ea51" 2025-08-14T21:25:20.6988389Z }, 2025-08-14T21:25:20.6988534Z { 2025-08-14T21:25:20.6988754Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6989049Z "size": 106, 2025-08-14T21:25:20.6989339Z "digest": "sha256:539ded9057364aade7abe23ab908d2caf53966a186734aa58ae84a56bee659eb" 2025-08-14T21:25:20.6989663Z }, 2025-08-14T21:25:20.6989806Z { 2025-08-14T21:25:20.6990035Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6990314Z "size": 163, 2025-08-14T21:25:20.6990582Z "digest": "sha256:28d482062637d32514edfc447913e98745d7c13d2f277531e64ffcf090ae6d92" 2025-08-14T21:25:20.6990894Z }, 2025-08-14T21:25:20.6991040Z { 2025-08-14T21:25:20.6991254Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6991527Z "size": 7943, 2025-08-14T21:25:20.6991805Z "digest": "sha256:3245316ff51b50b27da4ef7279733c92f76cc652b3fce3877c0e3d510430e8b3" 2025-08-14T21:25:20.6992100Z }, 2025-08-14T21:25:20.6992241Z { 2025-08-14T21:25:20.6992461Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6992724Z "size": 8073, 2025-08-14T21:25:20.6993002Z "digest": "sha256:b53167d1a6df0e4b67d637d073150dff1fb87a823864c0c98d77c15e56babc24" 2025-08-14T21:25:20.6993304Z }, 2025-08-14T21:25:20.6993443Z { 2025-08-14T21:25:20.6993653Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6993990Z "size": 303, 2025-08-14T21:25:20.6994257Z "digest": "sha256:7f5277f691672469f431fd90a8c2bb702c6c68333f6be2cff868f00e416c5a1a" 2025-08-14T21:25:20.6994547Z }, 2025-08-14T21:25:20.6994688Z { 2025-08-14T21:25:20.6994909Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6995168Z "size": 32, 2025-08-14T21:25:20.6995493Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6995808Z }, 2025-08-14T21:25:20.6995943Z { 2025-08-14T21:25:20.6996173Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6996446Z "size": 108, 2025-08-14T21:25:20.6996718Z "digest": "sha256:23dff10cdaa5b1e9c7250f0c58a6279f104b35408281e951bfe9983f97e3d9ed" 2025-08-14T21:25:20.6997022Z }, 2025-08-14T21:25:20.6997164Z { 2025-08-14T21:25:20.6997386Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6997654Z "size": 54145699, 2025-08-14T21:25:20.6997950Z "digest": "sha256:9fb73296da6ac15f37f36663bd10afc98abb8a01fb40bff4848de7247d28e018" 2025-08-14T21:25:20.6998276Z }, 2025-08-14T21:25:20.6998410Z { 2025-08-14T21:25:20.6998631Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-08-14T21:25:20.6998910Z "size": 32, 2025-08-14T21:25:20.6999178Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-08-14T21:25:20.6999489Z } 2025-08-14T21:25:20.6999633Z ] 2025-08-14T21:25:20.6999775Z } 2025-08-14T21:25:20.6999947Z + exit 0 2025-08-14T21:25:20.7022015Z ##[group]Run set -eux 2025-08-14T21:25:20.7022259Z set -eux 2025-08-14T21:25:20.7022820Z aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token | jq --raw-output '.SecretString' | jq -r .docker_hub_readonly_token | docker login --username pytorchbot --password-stdin 2025-08-14T21:25:20.7031023Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:20.7031310Z env: 2025-08-14T21:25:20.7031477Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:20.7031671Z ##[endgroup] 2025-08-14T21:25:20.7058834Z + aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token 2025-08-14T21:25:20.7059207Z + jq --raw-output .SecretString 2025-08-14T21:25:20.7060853Z + jq -r .docker_hub_readonly_token 2025-08-14T21:25:20.7061790Z + docker login --username pytorchbot --password-stdin 2025-08-14T21:25:21.2265645Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:25:21.2266134Z Configure a credential helper to remove this warning. See 2025-08-14T21:25:21.2266405Z Login Succeeded 2025-08-14T21:25:21.2266743Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:25:21.2267004Z 2025-08-14T21:25:21.2339355Z ##[group]Run tag=${ECR_DOCKER_IMAGE##*:} 2025-08-14T21:25:21.2339646Z tag=${ECR_DOCKER_IMAGE##*:} 2025-08-14T21:25:21.2340114Z echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}" 2025-08-14T21:25:21.2345232Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:21.2345481Z env: 2025-08-14T21:25:21.2345635Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:21.2346176Z ECR_DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:21.2346717Z ##[endgroup] 2025-08-14T21:25:21.2374102Z docker pull ghcr.io/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:21.2405802Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2025-08-14T21:25:21.2406109Z with: 2025-08-14T21:25:21.2406642Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:21.2407402Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:21.2407667Z env: 2025-08-14T21:25:21.2407833Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:21.2408017Z ##[endgroup] 2025-08-14T21:25:21.2436367Z ##[group]Run set -x 2025-08-14T21:25:21.2436620Z set -x 2025-08-14T21:25:21.2436815Z set +e 2025-08-14T21:25:21.2436996Z  2025-08-14T21:25:21.2437166Z login() { 2025-08-14T21:25:21.2437537Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-08-14T21:25:21.2437929Z } 2025-08-14T21:25:21.2438097Z  2025-08-14T21:25:21.2438321Z retry () { 2025-08-14T21:25:21.2438538Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-08-14T21:25:21.2438760Z } 2025-08-14T21:25:21.2438916Z  2025-08-14T21:25:21.2439087Z retry login "${DOCKER_REGISTRY}" 2025-08-14T21:25:21.2439292Z  
2025-08-14T21:25:21.2439623Z IMAGE_SIZE=$(docker manifest inspect "${DOCKER_IMAGE}" | jq '[.layers[].size, .config.size] | add / 1024 / 1024') 2025-08-14T21:25:21.2440054Z echo "Compressed size of image in MB: ${IMAGE_SIZE}" 2025-08-14T21:25:21.2440299Z  2025-08-14T21:25:21.2440453Z set -e 2025-08-14T21:25:21.2440689Z # ignore output since only exit code is used for conditional 2025-08-14T21:25:21.2441032Z # only pull docker image if it's not available locally 2025-08-14T21:25:21.2441376Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2025-08-14T21:25:21.2441717Z  retry docker pull "${DOCKER_IMAGE}" 2025-08-14T21:25:21.2442317Z fi 2025-08-14T21:25:21.2447115Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:25:21.2447378Z env: 2025-08-14T21:25:21.2447550Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:25:21.2448125Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:21.2448772Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:21.2449045Z ##[endgroup] 2025-08-14T21:25:21.2475967Z + set +e 2025-08-14T21:25:21.2481569Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:21.2485743Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:21.2487301Z + aws ecr get-login-password --region us-east-1 2025-08-14T21:25:21.2493213Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-08-14T21:25:21.7137865Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-08-14T21:25:21.7138312Z Configure a credential helper to remove this warning. See 2025-08-14T21:25:21.7139164Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-08-14T21:25:21.7139425Z 2025-08-14T21:25:21.7139504Z Login Succeeded 2025-08-14T21:25:21.7164886Z ++ docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:21.7165751Z ++ jq '[.layers[].size, .config.size] | add / 1024 / 1024' 2025-08-14T21:25:21.9471799Z + IMAGE_SIZE=27663.483686447144 2025-08-14T21:25:21.9472148Z + echo 'Compressed size of image in MB: 27663.483686447144' 2025-08-14T21:25:21.9472423Z + set -e 2025-08-14T21:25:21.9473493Z + docker inspect --type=image 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:21.9474151Z Compressed size of image in MB: 27663.483686447144 2025-08-14T21:25:21.9629217Z + retry docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:21.9630243Z + docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:25:22.2433593Z pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe: Pulling from pytorch/ci-image 2025-08-14T21:25:22.2434234Z 660ffc76f83b: Pulling fs layer 2025-08-14T21:25:22.2434459Z c7b4a852a455: Pulling fs layer 2025-08-14T21:25:22.2434668Z e5a28988c893: Pulling fs layer 2025-08-14T21:25:22.2434866Z 76a69b57b683: Pulling fs layer 
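The pull-docker-image step shown above reduces to a small reusable shell pattern: log in to ECR with a bounded retry, report the compressed image size via docker manifest inspect piped through jq, and pull only when the image is not already cached locally. A condensed standalone sketch of that pattern follows; the registry and image values are the ones used by this job, while the stricter "$@" quoting in retry is an editorial choice rather than a copy of the CI script. The layer-by-layer pull progress continues below.

#!/usr/bin/env bash
set -euo pipefail

DOCKER_REGISTRY="308535385114.dkr.ecr.us-east-1.amazonaws.com"
DOCKER_IMAGE="${DOCKER_REGISTRY}/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe"

# Authenticate to ECR; retry up to three times with a short backoff.
login() {
  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1"
}
retry() { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@"); }
retry login "${DOCKER_REGISTRY}"

# Compressed (registry-side) size of the image in MB.
IMAGE_SIZE=$(docker manifest inspect "${DOCKER_IMAGE}" | jq '[.layers[].size, .config.size] | add / 1024 / 1024')
echo "Compressed size of image in MB: ${IMAGE_SIZE}"

# Pull only if the image is not already present in the local daemon.
if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>&1; then
  retry docker pull "${DOCKER_IMAGE}"
fi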
2025-08-14T21:25:22.2435064Z 5c785dcb4cdb: Pulling fs layer 2025-08-14T21:25:22.2435267Z 836ab08052e8: Pulling fs layer 2025-08-14T21:25:22.2435505Z 53b11c77468c: Pulling fs layer 2025-08-14T21:25:22.2435702Z e97311a6a967: Pulling fs layer 2025-08-14T21:25:22.2435899Z 2c414689d31d: Pulling fs layer 2025-08-14T21:25:22.2436101Z 6d89b5f065d5: Pulling fs layer 2025-08-14T21:25:22.2436299Z 5a5cc76ada43: Pulling fs layer 2025-08-14T21:25:22.2436527Z fc6b37d40530: Pulling fs layer 2025-08-14T21:25:22.2436729Z 2e1657907860: Pulling fs layer 2025-08-14T21:25:22.2436919Z 7b92d7a4b8c7: Pulling fs layer 2025-08-14T21:25:22.2437108Z 4f4fb700ef54: Pulling fs layer 2025-08-14T21:25:22.2437303Z d6226eb61f82: Pulling fs layer 2025-08-14T21:25:22.2437493Z 83c70f4266a6: Pulling fs layer 2025-08-14T21:25:22.2437679Z 60c725d21861: Pulling fs layer 2025-08-14T21:25:22.2437869Z 53b11c77468c: Waiting 2025-08-14T21:25:22.2438050Z a504e76e66a4: Pulling fs layer 2025-08-14T21:25:22.2438241Z fc1c200a4f77: Pulling fs layer 2025-08-14T21:25:22.2438427Z e97311a6a967: Waiting 2025-08-14T21:25:22.2438601Z 43273c22704f: Pulling fs layer 2025-08-14T21:25:22.2438786Z 89df389d042a: Pulling fs layer 2025-08-14T21:25:22.2438975Z 684349f50d94: Pulling fs layer 2025-08-14T21:25:22.2439161Z 2c414689d31d: Waiting 2025-08-14T21:25:22.2439331Z 21d0eae87fb3: Pulling fs layer 2025-08-14T21:25:22.2439525Z c9c2b424b8e0: Pulling fs layer 2025-08-14T21:25:22.2439719Z 98dda28f3395: Pulling fs layer 2025-08-14T21:25:22.2439916Z acf5babd87f2: Pulling fs layer 2025-08-14T21:25:22.2440126Z 7c5050d8408d: Pulling fs layer 2025-08-14T21:25:22.2440321Z 7ddd14e2b548: Pulling fs layer 2025-08-14T21:25:22.2440518Z 4ba8e7a736c8: Pulling fs layer 2025-08-14T21:25:22.2440720Z 907c320fee2f: Pulling fs layer 2025-08-14T21:25:22.2440922Z 18c4ed1ec491: Pulling fs layer 2025-08-14T21:25:22.2441131Z d7618c2df6cd: Pulling fs layer 2025-08-14T21:25:22.2441330Z b7bdd9a6f789: Pulling fs layer 2025-08-14T21:25:22.2441534Z 6738ba83282e: Pulling fs layer 2025-08-14T21:25:22.2441743Z dfb0f2488639: Pulling fs layer 2025-08-14T21:25:22.2442064Z dc833b0762f2: Pulling fs layer 2025-08-14T21:25:22.2442270Z 8827df8ca2da: Pulling fs layer 2025-08-14T21:25:22.2442471Z fac8f3bd0f85: Pulling fs layer 2025-08-14T21:25:22.2442660Z d7cf7f140df3: Pulling fs layer 2025-08-14T21:25:22.2442859Z 733eedc8da8d: Pulling fs layer 2025-08-14T21:25:22.2443060Z 5b092eb06909: Pulling fs layer 2025-08-14T21:25:22.2443238Z 6d89b5f065d5: Waiting 2025-08-14T21:25:22.2443416Z bc5961031092: Pulling fs layer 2025-08-14T21:25:22.2443614Z 5a5cc76ada43: Waiting 2025-08-14T21:25:22.2443790Z fc6b37d40530: Waiting 2025-08-14T21:25:22.2443965Z 0531cc34c12a: Pulling fs layer 2025-08-14T21:25:22.2444157Z 2e1657907860: Waiting 2025-08-14T21:25:22.2444338Z 38c303d3b62e: Pulling fs layer 2025-08-14T21:25:22.2444526Z e06d15594a2a: Pulling fs layer 2025-08-14T21:25:22.2444714Z 7b92d7a4b8c7: Waiting 2025-08-14T21:25:22.2444893Z 0e55deb5cb38: Pulling fs layer 2025-08-14T21:25:22.2445076Z 4f4fb700ef54: Waiting 2025-08-14T21:25:22.2445250Z d6226eb61f82: Waiting 2025-08-14T21:25:22.2445432Z 4a53d66dce07: Pulling fs layer 2025-08-14T21:25:22.2445921Z 1519daa051b8: Pulling fs layer 2025-08-14T21:25:22.2446125Z 381ed91d2119: Pulling fs layer 2025-08-14T21:25:22.2446329Z c6b0a01a96dd: Pulling fs layer 2025-08-14T21:25:22.2446525Z 62df6413daee: Pulling fs layer 2025-08-14T21:25:22.2446727Z 7a18bc2a6881: Pulling fs layer 2025-08-14T21:25:22.2446924Z 93359cd58a8c: Pulling fs layer 2025-08-14T21:25:22.2447114Z c35ba0a1f353: 
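As an aside, the layer manifest dumped earlier in this log can also be used to see which layers account for most of the roughly 27 GB compressed size reported above. This is not part of the workflow, just a hypothetical debugging one-liner that reuses the DOCKER_IMAGE variable from the sketch above; the pull progress continues below.

# Print the five largest layers, size in MB followed by digest, largest first.
docker manifest inspect "${DOCKER_IMAGE}" \
  | jq -r '.layers | sort_by(-.size) | .[:5][] | "\(.size / 1024 / 1024 | floor) MB  \(.digest)"'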
Pulling fs layer 2025-08-14T21:25:22.2447410Z dcf1e01c98d6: Pulling fs layer 2025-08-14T21:25:22.2447611Z bad0564f61fd: Pulling fs layer 2025-08-14T21:25:22.2447804Z 83c70f4266a6: Waiting 2025-08-14T21:25:22.2447969Z 76a69b57b683: Waiting 2025-08-14T21:25:22.2448144Z 539ded905736: Pulling fs layer 2025-08-14T21:25:22.2448333Z 60c725d21861: Waiting 2025-08-14T21:25:22.2448504Z 28d482062637: Pulling fs layer 2025-08-14T21:25:22.2448691Z a504e76e66a4: Waiting 2025-08-14T21:25:22.2448870Z 3245316ff51b: Pulling fs layer 2025-08-14T21:25:22.2449052Z 5c785dcb4cdb: Waiting 2025-08-14T21:25:22.2449230Z fc1c200a4f77: Waiting 2025-08-14T21:25:22.2449400Z 836ab08052e8: Waiting 2025-08-14T21:25:22.2449566Z 43273c22704f: Waiting 2025-08-14T21:25:22.2449743Z b53167d1a6df: Pulling fs layer 2025-08-14T21:25:22.2449933Z 89df389d042a: Waiting 2025-08-14T21:25:22.2450104Z 7f5277f69167: Pulling fs layer 2025-08-14T21:25:22.2450294Z 684349f50d94: Waiting 2025-08-14T21:25:22.2450464Z 0531cc34c12a: Waiting 2025-08-14T21:25:22.2450634Z 23dff10cdaa5: Pulling fs layer 2025-08-14T21:25:22.2450834Z 21d0eae87fb3: Waiting 2025-08-14T21:25:22.2451017Z 9fb73296da6a: Pulling fs layer 2025-08-14T21:25:22.2451197Z c9c2b424b8e0: Waiting 2025-08-14T21:25:22.2451368Z 38c303d3b62e: Waiting 2025-08-14T21:25:22.2451536Z dc833b0762f2: Waiting 2025-08-14T21:25:22.2451707Z d7cf7f140df3: Waiting 2025-08-14T21:25:22.2451871Z 733eedc8da8d: Waiting 2025-08-14T21:25:22.2452039Z 4a53d66dce07: Waiting 2025-08-14T21:25:22.2452207Z 5b092eb06909: Waiting 2025-08-14T21:25:22.2452364Z bc5961031092: Waiting 2025-08-14T21:25:22.2452529Z 8827df8ca2da: Waiting 2025-08-14T21:25:22.2452698Z 1519daa051b8: Waiting 2025-08-14T21:25:22.2452860Z 98dda28f3395: Waiting 2025-08-14T21:25:22.2453030Z bad0564f61fd: Waiting 2025-08-14T21:25:22.2453199Z 381ed91d2119: Waiting 2025-08-14T21:25:22.2453360Z e06d15594a2a: Waiting 2025-08-14T21:25:22.2453534Z acf5babd87f2: Waiting 2025-08-14T21:25:22.2453705Z 539ded905736: Waiting 2025-08-14T21:25:22.2453865Z 7c5050d8408d: Waiting 2025-08-14T21:25:22.2454042Z c6b0a01a96dd: Waiting 2025-08-14T21:25:22.2454212Z 28d482062637: Waiting 2025-08-14T21:25:22.2454373Z 62df6413daee: Waiting 2025-08-14T21:25:22.2454543Z 7ddd14e2b548: Waiting 2025-08-14T21:25:22.2454717Z 23dff10cdaa5: Waiting 2025-08-14T21:25:22.2454880Z 0e55deb5cb38: Waiting 2025-08-14T21:25:22.2455047Z 7f5277f69167: Waiting 2025-08-14T21:25:22.2455216Z 4ba8e7a736c8: Waiting 2025-08-14T21:25:22.2455378Z 9fb73296da6a: Waiting 2025-08-14T21:25:22.2455548Z 7a18bc2a6881: Waiting 2025-08-14T21:25:22.2455719Z fac8f3bd0f85: Waiting 2025-08-14T21:25:22.2455881Z dfb0f2488639: Waiting 2025-08-14T21:25:22.2456049Z b53167d1a6df: Waiting 2025-08-14T21:25:22.2456218Z dcf1e01c98d6: Waiting 2025-08-14T21:25:22.2456385Z d7618c2df6cd: Waiting 2025-08-14T21:25:22.2456547Z b7bdd9a6f789: Waiting 2025-08-14T21:25:22.2456717Z 18c4ed1ec491: Waiting 2025-08-14T21:25:22.2456884Z 6738ba83282e: Waiting 2025-08-14T21:25:22.2457045Z c35ba0a1f353: Waiting 2025-08-14T21:25:22.2457213Z 3245316ff51b: Waiting 2025-08-14T21:25:22.2457386Z 93359cd58a8c: Waiting 2025-08-14T21:25:22.2457564Z 907c320fee2f: Waiting 2025-08-14T21:25:22.3296557Z c7b4a852a455: Verifying Checksum 2025-08-14T21:25:22.3296872Z c7b4a852a455: Download complete 2025-08-14T21:25:22.4295340Z 76a69b57b683: Verifying Checksum 2025-08-14T21:25:22.4295663Z 76a69b57b683: Download complete 2025-08-14T21:25:22.5097410Z 5c785dcb4cdb: Download complete 2025-08-14T21:25:22.5929050Z 836ab08052e8: Download complete 2025-08-14T21:25:22.5968414Z 
660ffc76f83b: Download complete 2025-08-14T21:25:22.6624851Z 53b11c77468c: Download complete 2025-08-14T21:25:22.6844177Z e97311a6a967: Download complete 2025-08-14T21:25:22.7750663Z 6d89b5f065d5: Download complete 2025-08-14T21:25:22.8663343Z 5a5cc76ada43: Download complete 2025-08-14T21:25:22.9424076Z fc6b37d40530: Verifying Checksum 2025-08-14T21:25:22.9424380Z fc6b37d40530: Download complete 2025-08-14T21:25:23.0474649Z 2e1657907860: Verifying Checksum 2025-08-14T21:25:23.0474953Z 2e1657907860: Download complete 2025-08-14T21:25:23.7435594Z 660ffc76f83b: Pull complete 2025-08-14T21:25:23.7600651Z c7b4a852a455: Pull complete 2025-08-14T21:25:23.8226286Z 2c414689d31d: Verifying Checksum 2025-08-14T21:25:23.8226613Z 2c414689d31d: Download complete 2025-08-14T21:25:23.8368100Z 4f4fb700ef54: Download complete 2025-08-14T21:25:23.9196377Z d6226eb61f82: Download complete 2025-08-14T21:25:24.0090019Z 83c70f4266a6: Verifying Checksum 2025-08-14T21:25:24.0090331Z 83c70f4266a6: Download complete 2025-08-14T21:25:24.0872494Z 60c725d21861: Download complete 2025-08-14T21:25:24.2027497Z a504e76e66a4: Verifying Checksum 2025-08-14T21:25:24.2029807Z a504e76e66a4: Download complete 2025-08-14T21:25:24.2673469Z fc1c200a4f77: Download complete 2025-08-14T21:25:24.3430902Z 43273c22704f: Verifying Checksum 2025-08-14T21:25:24.3431219Z 43273c22704f: Download complete 2025-08-14T21:25:24.4305879Z 89df389d042a: Verifying Checksum 2025-08-14T21:25:24.4306210Z 89df389d042a: Download complete 2025-08-14T21:25:24.5259649Z 684349f50d94: Download complete 2025-08-14T21:25:24.6280530Z 21d0eae87fb3: Verifying Checksum 2025-08-14T21:25:24.6282700Z 21d0eae87fb3: Download complete 2025-08-14T21:25:24.7036244Z c9c2b424b8e0: Verifying Checksum 2025-08-14T21:25:24.7036760Z c9c2b424b8e0: Download complete 2025-08-14T21:25:24.8105569Z 98dda28f3395: Verifying Checksum 2025-08-14T21:25:24.8107448Z 98dda28f3395: Download complete 2025-08-14T21:25:24.8837273Z acf5babd87f2: Download complete 2025-08-14T21:25:25.4478445Z e5a28988c893: Verifying Checksum 2025-08-14T21:25:25.4478763Z e5a28988c893: Download complete 2025-08-14T21:25:25.5450712Z 7ddd14e2b548: Verifying Checksum 2025-08-14T21:25:25.5451278Z 7ddd14e2b548: Download complete 2025-08-14T21:25:25.6562737Z 4ba8e7a736c8: Verifying Checksum 2025-08-14T21:25:25.6563073Z 4ba8e7a736c8: Download complete 2025-08-14T21:25:25.7389565Z 907c320fee2f: Verifying Checksum 2025-08-14T21:25:25.7389894Z 907c320fee2f: Download complete 2025-08-14T21:25:25.8116610Z 18c4ed1ec491: Download complete 2025-08-14T21:25:26.0591947Z d7618c2df6cd: Verifying Checksum 2025-08-14T21:25:26.0593792Z d7618c2df6cd: Download complete 2025-08-14T21:25:26.1533018Z b7bdd9a6f789: Download complete 2025-08-14T21:25:26.2374475Z 6738ba83282e: Verifying Checksum 2025-08-14T21:25:26.2374806Z 6738ba83282e: Download complete 2025-08-14T21:25:26.3096759Z dfb0f2488639: Verifying Checksum 2025-08-14T21:25:26.3097070Z dfb0f2488639: Download complete 2025-08-14T21:25:26.3774723Z dc833b0762f2: Verifying Checksum 2025-08-14T21:25:26.3774989Z dc833b0762f2: Download complete 2025-08-14T21:25:26.4858323Z 8827df8ca2da: Download complete 2025-08-14T21:25:29.4904405Z 7c5050d8408d: Verifying Checksum 2025-08-14T21:25:29.4904755Z 7c5050d8408d: Download complete 2025-08-14T21:25:29.5648863Z d7cf7f140df3: Verifying Checksum 2025-08-14T21:25:29.5649199Z d7cf7f140df3: Download complete 2025-08-14T21:25:32.3502227Z 733eedc8da8d: Verifying Checksum 2025-08-14T21:25:32.3502545Z 733eedc8da8d: Download complete 2025-08-14T21:25:36.3493550Z 
e5a28988c893: Pull complete 2025-08-14T21:25:36.7220256Z 76a69b57b683: Pull complete 2025-08-14T21:25:37.0095562Z 5c785dcb4cdb: Pull complete 2025-08-14T21:25:37.2672482Z 836ab08052e8: Pull complete 2025-08-14T21:25:37.5473662Z 53b11c77468c: Pull complete 2025-08-14T21:25:37.8210243Z e97311a6a967: Pull complete 2025-08-14T21:25:41.2681414Z 2c414689d31d: Pull complete 2025-08-14T21:25:41.5789108Z 6d89b5f065d5: Pull complete 2025-08-14T21:25:41.8474513Z 5a5cc76ada43: Pull complete 2025-08-14T21:25:42.1219136Z fc6b37d40530: Pull complete 2025-08-14T21:25:42.3685923Z 2e1657907860: Pull complete 2025-08-14T21:25:57.0922894Z 7b92d7a4b8c7: Verifying Checksum 2025-08-14T21:25:57.0923385Z 7b92d7a4b8c7: Download complete 2025-08-14T21:25:57.1918486Z bc5961031092: Verifying Checksum 2025-08-14T21:25:57.1920781Z bc5961031092: Download complete 2025-08-14T21:25:57.2732059Z 0531cc34c12a: Verifying Checksum 2025-08-14T21:25:57.2733862Z 0531cc34c12a: Download complete 2025-08-14T21:25:57.3420933Z 38c303d3b62e: Verifying Checksum 2025-08-14T21:25:57.3421446Z 38c303d3b62e: Download complete 2025-08-14T21:25:57.4145341Z e06d15594a2a: Verifying Checksum 2025-08-14T21:25:57.4150532Z e06d15594a2a: Download complete 2025-08-14T21:25:57.4779481Z 0e55deb5cb38: Verifying Checksum 2025-08-14T21:25:57.4784755Z 0e55deb5cb38: Download complete 2025-08-14T21:25:57.5702710Z 4a53d66dce07: Verifying Checksum 2025-08-14T21:25:57.5703017Z 4a53d66dce07: Download complete 2025-08-14T21:25:57.6729385Z 1519daa051b8: Verifying Checksum 2025-08-14T21:25:57.6729923Z 1519daa051b8: Download complete 2025-08-14T21:25:57.7552897Z 381ed91d2119: Download complete 2025-08-14T21:25:57.8197871Z c6b0a01a96dd: Verifying Checksum 2025-08-14T21:25:57.8203987Z c6b0a01a96dd: Download complete 2025-08-14T21:25:57.9190987Z 62df6413daee: Verifying Checksum 2025-08-14T21:25:57.9928272Z 62df6413daee: Download complete 2025-08-14T21:25:57.9928607Z 7a18bc2a6881: Verifying Checksum 2025-08-14T21:25:57.9929031Z 7a18bc2a6881: Download complete 2025-08-14T21:25:58.0551526Z 93359cd58a8c: Verifying Checksum 2025-08-14T21:25:58.0551832Z 93359cd58a8c: Download complete 2025-08-14T21:25:58.1161535Z c35ba0a1f353: Verifying Checksum 2025-08-14T21:25:58.1162019Z c35ba0a1f353: Download complete 2025-08-14T21:25:58.2139320Z dcf1e01c98d6: Verifying Checksum 2025-08-14T21:25:58.2139635Z dcf1e01c98d6: Download complete 2025-08-14T21:26:00.7279594Z bad0564f61fd: Verifying Checksum 2025-08-14T21:26:00.7279913Z bad0564f61fd: Download complete 2025-08-14T21:26:00.8309111Z 539ded905736: Download complete 2025-08-14T21:26:00.9392406Z 28d482062637: Verifying Checksum 2025-08-14T21:26:00.9392699Z 28d482062637: Download complete 2025-08-14T21:26:01.0160402Z 3245316ff51b: Download complete 2025-08-14T21:26:01.1049047Z b53167d1a6df: Verifying Checksum 2025-08-14T21:26:01.1049366Z b53167d1a6df: Download complete 2025-08-14T21:26:01.1920394Z 7f5277f69167: Verifying Checksum 2025-08-14T21:26:01.1920731Z 7f5277f69167: Download complete 2025-08-14T21:26:01.2641571Z 23dff10cdaa5: Download complete 2025-08-14T21:26:01.8614627Z 9fb73296da6a: Verifying Checksum 2025-08-14T21:26:01.8615744Z 9fb73296da6a: Download complete 2025-08-14T21:26:36.7578143Z 5b092eb06909: Verifying Checksum 2025-08-14T21:26:36.7578451Z 5b092eb06909: Download complete 2025-08-14T21:27:11.3279293Z 7b92d7a4b8c7: Pull complete 2025-08-14T21:27:11.5549010Z 4f4fb700ef54: Pull complete 2025-08-14T21:27:12.0532660Z d6226eb61f82: Pull complete 2025-08-14T21:27:12.4676515Z 83c70f4266a6: Pull complete 
2025-08-14T21:27:12.7712242Z 60c725d21861: Pull complete 2025-08-14T21:27:13.2397487Z a504e76e66a4: Pull complete 2025-08-14T21:27:13.4995276Z fc1c200a4f77: Pull complete 2025-08-14T21:27:13.9441553Z 43273c22704f: Pull complete 2025-08-14T21:27:14.3451745Z 89df389d042a: Pull complete 2025-08-14T21:27:14.6399126Z 684349f50d94: Pull complete 2025-08-14T21:27:15.0002603Z 21d0eae87fb3: Pull complete 2025-08-14T21:27:15.3404135Z c9c2b424b8e0: Pull complete 2025-08-14T21:27:15.7397370Z 98dda28f3395: Pull complete 2025-08-14T21:27:16.0196803Z acf5babd87f2: Pull complete 2025-08-14T21:27:27.1055410Z 7c5050d8408d: Pull complete 2025-08-14T21:27:27.5952753Z 7ddd14e2b548: Pull complete 2025-08-14T21:27:28.0474306Z 4ba8e7a736c8: Pull complete 2025-08-14T21:27:28.5612246Z 907c320fee2f: Pull complete 2025-08-14T21:27:28.9146953Z 18c4ed1ec491: Pull complete 2025-08-14T21:27:29.6791860Z d7618c2df6cd: Pull complete 2025-08-14T21:27:30.1280842Z b7bdd9a6f789: Pull complete 2025-08-14T21:27:30.5580985Z 6738ba83282e: Pull complete 2025-08-14T21:27:31.3200561Z dfb0f2488639: Pull complete 2025-08-14T21:27:31.6804981Z dc833b0762f2: Pull complete 2025-08-14T21:27:32.2010099Z 8827df8ca2da: Pull complete 2025-08-14T21:28:23.2721998Z fac8f3bd0f85: Verifying Checksum 2025-08-14T21:28:23.2726064Z fac8f3bd0f85: Download complete 2025-08-14T21:32:20.6607514Z fac8f3bd0f85: Pull complete 2025-08-14T21:32:21.0666809Z d7cf7f140df3: Pull complete 2025-08-14T21:32:23.5147375Z 733eedc8da8d: Pull complete 2025-08-14T21:34:48.7802189Z 5b092eb06909: Pull complete 2025-08-14T21:34:48.8060876Z bc5961031092: Pull complete 2025-08-14T21:34:48.8350062Z 0531cc34c12a: Pull complete 2025-08-14T21:34:48.8928163Z 38c303d3b62e: Pull complete 2025-08-14T21:34:48.9469019Z e06d15594a2a: Pull complete 2025-08-14T21:34:48.9721949Z 0e55deb5cb38: Pull complete 2025-08-14T21:34:49.0201815Z 4a53d66dce07: Pull complete 2025-08-14T21:34:49.0705552Z 1519daa051b8: Pull complete 2025-08-14T21:34:49.0956847Z 381ed91d2119: Pull complete 2025-08-14T21:34:49.1467943Z c6b0a01a96dd: Pull complete 2025-08-14T21:34:49.1741302Z 62df6413daee: Pull complete 2025-08-14T21:34:49.2279589Z 7a18bc2a6881: Pull complete 2025-08-14T21:34:49.2538483Z 93359cd58a8c: Pull complete 2025-08-14T21:34:49.3029907Z c35ba0a1f353: Pull complete 2025-08-14T21:34:49.3278150Z dcf1e01c98d6: Pull complete 2025-08-14T21:34:57.9101575Z bad0564f61fd: Pull complete 2025-08-14T21:34:58.0999661Z 539ded905736: Pull complete 2025-08-14T21:34:58.2460033Z 28d482062637: Pull complete 2025-08-14T21:34:58.4502759Z 3245316ff51b: Pull complete 2025-08-14T21:34:58.6998906Z b53167d1a6df: Pull complete 2025-08-14T21:34:58.9650014Z 7f5277f69167: Pull complete 2025-08-14T21:34:59.7073273Z 23dff10cdaa5: Pull complete 2025-08-14T21:35:02.4799344Z 9fb73296da6a: Pull complete 2025-08-14T21:35:03.1204867Z Digest: sha256:4236794baba289041d240d08fd393bbd57497c3012e5e0ccd9fd98f61ebf35c6 2025-08-14T21:35:03.1942508Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:35:03.2217457Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:35:03.2303104Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:35:03.2303753Z echo "IN_CONTAINER_RUNNER=$(if [ -f 
/.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-08-14T21:35:03.2311331Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:03.2311587Z env: 2025-08-14T21:35:03.2311750Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:03.2311930Z ##[endgroup] 2025-08-14T21:35:03.2391217Z Prepare all required actions 2025-08-14T21:35:03.2416458Z ##[group]Run ./.github/actions/get-workflow-job-id 2025-08-14T21:35:03.2416732Z with: 2025-08-14T21:35:03.2417328Z github-token: *** 2025-08-14T21:35:03.2417499Z env: 2025-08-14T21:35:03.2417670Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:03.2417866Z ##[endgroup] 2025-08-14T21:35:03.2579480Z ##[group]Run set -eux 2025-08-14T21:35:03.2579860Z set -eux 2025-08-14T21:35:03.2580188Z python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-08-14T21:35:03.2586220Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:03.2586478Z env: 2025-08-14T21:35:03.2586649Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:03.2587134Z GITHUB_TOKEN: *** 2025-08-14T21:35:03.2587306Z ##[endgroup] 2025-08-14T21:35:03.2614928Z + python3 .github/scripts/get_workflow_job_id.py 16976338999 i-06c8ea4ed8741f176 2025-08-14T21:35:04.8588383Z Setting output job-id=48128261046 2025-08-14T21:35:04.8589187Z Setting output job-name=linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:04.8735542Z ##[group]Run python3 -m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84 2025-08-14T21:35:04.8736015Z python3 -m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84 2025-08-14T21:35:04.8736583Z python3 -m tools.stats.monitor --log-interval "$MONITOR_LOG_INTERVAL" --data-collect-interval "$MONITOR_DATA_COLLECT_INTERVAL" > usage_log.txt 2>&1 & 2025-08-14T21:35:04.8737108Z echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}" 2025-08-14T21:35:04.8743546Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:04.8743803Z env: 2025-08-14T21:35:04.8743968Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:04.8744293Z JOB_ID: 48128261046 2025-08-14T21:35:04.8744648Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:04.8745042Z WORKFLOW_NAME: inductor-periodic 2025-08-14T21:35:04.8745276Z WORKFLOW_RUN_ID: 16976338999 2025-08-14T21:35:04.8745473Z MONITOR_LOG_INTERVAL: 5 2025-08-14T21:35:04.8745663Z MONITOR_DATA_COLLECT_INTERVAL: 1 2025-08-14T21:35:04.8745857Z ##[endgroup] 2025-08-14T21:35:05.3948149Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T21:35:05.6991391Z Collecting psutil==5.9.8 2025-08-14T21:35:05.7152113Z Downloading psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (288 kB) 2025-08-14T21:35:05.8975566Z Collecting dataclasses_json==0.6.7 2025-08-14T21:35:05.9010398Z Downloading dataclasses_json-0.6.7-py3-none-any.whl (28 kB) 2025-08-14T21:35:05.9461100Z Collecting nvidia-ml-py==11.525.84 2025-08-14T21:35:05.9498997Z Downloading nvidia_ml_py-11.525.84-py3-none-any.whl (34 kB) 2025-08-14T21:35:06.0161079Z Collecting typing-inspect<1,>=0.4.0 2025-08-14T21:35:06.0193006Z Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB) 2025-08-14T21:35:06.1713602Z Collecting marshmallow<4.0.0,>=3.18.0 2025-08-14T21:35:06.1750765Z Downloading marshmallow-3.26.1-py3-none-any.whl (50 kB) 
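The monitoring step above installs psutil, dataclasses_json and nvidia-ml-py and then starts tools.stats.monitor in the background, recording its PID via ${!} so a later step can shut it down. A minimal sketch of that launch pattern follows; the trailing kill is an assumption about the cleanup step and is not shown in this log. The remaining pip and artifact-download output follows below.

python3 -m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84

# Start the resource-usage monitor in the background and remember its PID.
python3 -m tools.stats.monitor \
  --log-interval "$MONITOR_LOG_INTERVAL" \
  --data-collect-interval "$MONITOR_DATA_COLLECT_INTERVAL" > usage_log.txt 2>&1 &
MONITOR_PID=$!
echo "monitor-script-pid=${MONITOR_PID}" >> "${GITHUB_OUTPUT}"

# ... test steps run here ...
# kill "${MONITOR_PID}" || true   # assumed cleanup in a later step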
2025-08-14T21:35:06.3333754Z Collecting packaging>=17.0 2025-08-14T21:35:06.3369225Z Downloading packaging-25.0-py3-none-any.whl (66 kB) 2025-08-14T21:35:06.5012297Z Collecting typing-extensions>=3.7.4 2025-08-14T21:35:06.5046160Z Downloading typing_extensions-4.14.1-py3-none-any.whl (43 kB) 2025-08-14T21:35:06.6294307Z Collecting mypy-extensions>=0.3.0 2025-08-14T21:35:06.6330854Z Downloading mypy_extensions-1.1.0-py3-none-any.whl (5.0 kB) 2025-08-14T21:35:06.9261024Z Installing collected packages: typing-extensions, packaging, mypy-extensions, typing-inspect, marshmallow, psutil, nvidia-ml-py, dataclasses-json 2025-08-14T21:35:07.7076455Z Successfully installed dataclasses-json-0.6.7 marshmallow-3.26.1 mypy-extensions-1.1.0 nvidia-ml-py-11.525.84 packaging-25.0 psutil-5.9.8 typing-extensions-4.14.1 typing-inspect-0.9.0 2025-08-14T21:35:07.9874230Z Prepare all required actions 2025-08-14T21:35:07.9874566Z Getting action download info 2025-08-14T21:35:08.1325894Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:1da556a7aa0a088e3153970611f6c432d58e80e6) 2025-08-14T21:35:08.8180373Z Download action repository 'actions/download-artifact@v4' (SHA:d3f86a106a0bac45b974a628896c90dbdf5c8093) 2025-08-14T21:35:11.7114310Z ##[group]Run ./.github/actions/download-build-artifacts 2025-08-14T21:35:11.7114631Z with: 2025-08-14T21:35:11.7114868Z name: linux-jammy-py3.9-gcc11-build 2025-08-14T21:35:11.7115130Z s3-bucket: gha-artifacts 2025-08-14T21:35:11.7115356Z env: 2025-08-14T21:35:11.7115547Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:11.7115758Z ##[endgroup] 2025-08-14T21:35:11.7144381Z ##[group]Run seemethere/download-artifact-s3@v4 2025-08-14T21:35:11.7144700Z with: 2025-08-14T21:35:11.7144902Z name: linux-jammy-py3.9-gcc11-build 2025-08-14T21:35:11.7145137Z s3-bucket: gha-artifacts 2025-08-14T21:35:11.7145376Z region: us-east-1 2025-08-14T21:35:11.7145552Z env: 2025-08-14T21:35:11.7145722Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:11.7145923Z ##[endgroup] 2025-08-14T21:35:12.4716934Z (node:47849) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-08-14T21:35:12.4721357Z 2025-08-14T21:35:12.4723328Z Please migrate your code to use AWS SDK for JavaScript (v3). 
2025-08-14T21:35:12.4725718Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-08-14T21:35:12.4726200Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-08-14T21:35:13.8297022Z Found 1 objects with prefix pytorch/pytorch/16976338999/linux-jammy-py3.9-gcc11-build/ 2025-08-14T21:35:13.8297776Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-08-14T21:35:18.4366662Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-08-14T21:35:18.4372005Z Artifact download has finished successfully 2025-08-14T21:35:18.4586183Z ##[group]Run unzip -o artifacts.zip 2025-08-14T21:35:18.4586428Z unzip -o artifacts.zip 2025-08-14T21:35:18.4591061Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:18.4591308Z env: 2025-08-14T21:35:18.4591468Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:18.4591658Z ##[endgroup] 2025-08-14T21:35:18.4658341Z Archive: artifacts.zip 2025-08-14T21:35:18.4658605Z creating: dist/ 2025-08-14T21:35:19.5565738Z inflating: dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:19.5569993Z creating: dist/vision/ 2025-08-14T21:35:19.5646925Z inflating: dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:19.5650566Z creating: dist/audio/ 2025-08-14T21:35:19.5751389Z inflating: dist/audio/torchaudio-2.8.0a0+bdb88e1-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:19.5758567Z creating: dist/ao/ 2025-08-14T21:35:19.5791694Z inflating: dist/ao/torchao-0.7.0+git51c87b6e-py3-none-any.whl 2025-08-14T21:35:19.5912002Z inflating: dist/.ninja_log 2025-08-14T21:35:19.5915424Z creating: build/custom_test_artifacts/ 2025-08-14T21:35:19.5915910Z creating: build/custom_test_artifacts/custom-op-build/ 2025-08-14T21:35:19.5916354Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2025-08-14T21:35:19.5917337Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/pkgRedirects/ 2025-08-14T21:35:19.5917829Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeConfigureLog.yaml 2025-08-14T21:35:19.5918235Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/ 2025-08-14T21:35:19.5918631Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-08-14T21:35:19.5919081Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-08-14T21:35:19.5919882Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-08-14T21:35:19.5920346Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-08-14T21:35:19.5920816Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-08-14T21:35:19.5921251Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-08-14T21:35:19.5921678Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-08-14T21:35:19.5922101Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-08-14T21:35:19.5922591Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-08-14T21:35:19.5923094Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-08-14T21:35:19.5923544Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-08-14T21:35:19.5924627Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-08-14T21:35:19.5925322Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-08-14T21:35:19.5926029Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeScratch/ 2025-08-14T21:35:19.5926636Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2025-08-14T21:35:19.5927652Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2025-08-14T21:35:19.5928151Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.ts 2025-08-14T21:35:19.5928904Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.make 2025-08-14T21:35:19.5929397Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2025-08-14T21:35:19.5929842Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2025-08-14T21:35:19.5930300Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2025-08-14T21:35:19.5930759Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2025-08-14T21:35:19.5931220Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2025-08-14T21:35:19.5931675Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2025-08-14T21:35:19.5932126Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2025-08-14T21:35:19.5951389Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o.d 2025-08-14T21:35:19.6123759Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2025-08-14T21:35:19.6124351Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2025-08-14T21:35:19.6124828Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.ts 2025-08-14T21:35:19.6125345Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.make 2025-08-14T21:35:19.6125844Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2025-08-14T21:35:19.6126306Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2025-08-14T21:35:19.6126782Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2025-08-14T21:35:19.6127341Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2025-08-14T21:35:19.6128162Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2025-08-14T21:35:19.6128654Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2025-08-14T21:35:19.6129129Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2025-08-14T21:35:19.6145185Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o.d 2025-08-14T21:35:19.6219767Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2025-08-14T21:35:19.6220646Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-08-14T21:35:19.6221171Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2025-08-14T21:35:19.6221669Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2025-08-14T21:35:19.6222100Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2025-08-14T21:35:19.6222500Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2025-08-14T21:35:19.6222900Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/InstallScripts.json 2025-08-14T21:35:19.6223268Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2025-08-14T21:35:19.6223741Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2025-08-14T21:35:19.6224215Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2025-08-14T21:35:19.6374685Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2025-08-14T21:35:19.6421316Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 2025-08-14T21:35:19.6422304Z creating: build/custom_test_artifacts/jit-hook-build/ 2025-08-14T21:35:19.6422788Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2025-08-14T21:35:19.6423321Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/pkgRedirects/ 2025-08-14T21:35:19.6423791Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeConfigureLog.yaml 2025-08-14T21:35:19.6424225Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/ 2025-08-14T21:35:19.6424646Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-08-14T21:35:19.6425231Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-08-14T21:35:19.6425682Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-08-14T21:35:19.6428283Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-08-14T21:35:19.6428829Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-08-14T21:35:19.6429311Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-08-14T21:35:19.6429754Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-08-14T21:35:19.6430170Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-08-14T21:35:19.6430887Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-08-14T21:35:19.6433617Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-08-14T21:35:19.6434118Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-08-14T21:35:19.6434610Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-08-14T21:35:19.6438737Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-08-14T21:35:19.6440802Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeScratch/ 2025-08-14T21:35:19.6441667Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2025-08-14T21:35:19.6442523Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2025-08-14T21:35:19.6442999Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.ts 2025-08-14T21:35:19.6443527Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.make 2025-08-14T21:35:19.6444020Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2025-08-14T21:35:19.6444472Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2025-08-14T21:35:19.6444940Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2025-08-14T21:35:19.6445422Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2025-08-14T21:35:19.6445897Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2025-08-14T21:35:19.6446355Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2025-08-14T21:35:19.6446817Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2025-08-14T21:35:19.6456096Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o.d 2025-08-14T21:35:19.6517885Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2025-08-14T21:35:19.6519837Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-08-14T21:35:19.6520738Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2025-08-14T21:35:19.6524348Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2025-08-14T21:35:19.6524877Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2025-08-14T21:35:19.6529980Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2025-08-14T21:35:19.6530520Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/InstallScripts.json 2025-08-14T21:35:19.6530910Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2025-08-14T21:35:19.6531238Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2025-08-14T21:35:19.6531580Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2025-08-14T21:35:19.6554397Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2025-08-14T21:35:19.6554930Z creating: build/custom_test_artifacts/custom-backend-build/ 2025-08-14T21:35:19.6555412Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2025-08-14T21:35:19.6555804Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/pkgRedirects/ 2025-08-14T21:35:19.6557971Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeConfigureLog.yaml 2025-08-14T21:35:19.6558606Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/ 2025-08-14T21:35:19.6559176Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-08-14T21:35:19.6559772Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-08-14T21:35:19.6560203Z creating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-08-14T21:35:19.6560686Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-08-14T21:35:19.6561309Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-08-14T21:35:19.6562757Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-08-14T21:35:19.6563307Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-08-14T21:35:19.6563772Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-08-14T21:35:19.6564486Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-08-14T21:35:19.6565271Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-08-14T21:35:19.6566019Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-08-14T21:35:19.6568756Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-08-14T21:35:19.6569515Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-08-14T21:35:19.6570135Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeScratch/ 2025-08-14T21:35:19.6570714Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2025-08-14T21:35:19.6571170Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 2025-08-14T21:35:19.6571697Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.ts 2025-08-14T21:35:19.6572261Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.make 2025-08-14T21:35:19.6572794Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2025-08-14T21:35:19.6573288Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2025-08-14T21:35:19.6574010Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2025-08-14T21:35:19.6574507Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2025-08-14T21:35:19.6575012Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2025-08-14T21:35:19.6575520Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2025-08-14T21:35:19.6576013Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2025-08-14T21:35:19.6576540Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o.d 2025-08-14T21:35:19.6685454Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2025-08-14T21:35:19.6687397Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2025-08-14T21:35:19.6688138Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.ts 2025-08-14T21:35:19.6691499Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.make 2025-08-14T21:35:19.6692221Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2025-08-14T21:35:19.6695355Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2025-08-14T21:35:19.6695990Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2025-08-14T21:35:19.6696640Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2025-08-14T21:35:19.6697168Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2025-08-14T21:35:19.6697964Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2025-08-14T21:35:19.6698473Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2025-08-14T21:35:19.6702314Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o.d 2025-08-14T21:35:19.6752479Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2025-08-14T21:35:19.6757754Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-08-14T21:35:19.6758519Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2025-08-14T21:35:19.6759056Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 2025-08-14T21:35:19.6759537Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2025-08-14T21:35:19.6760013Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2025-08-14T21:35:19.6760504Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/InstallScripts.json 2025-08-14T21:35:19.6761007Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2025-08-14T21:35:19.6761351Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2025-08-14T21:35:19.6761702Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2025-08-14T21:35:19.6843810Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2025-08-14T21:35:19.6879918Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2025-08-14T21:35:19.6881685Z creating: build/lib/ 2025-08-14T21:35:19.6953296Z inflating: build/lib/libprotobuf-lite.a 2025-08-14T21:35:19.7355582Z inflating: build/lib/libprotobuf.a 2025-08-14T21:35:19.7814836Z inflating: build/lib/libprotoc.a 2025-08-14T21:35:19.7824434Z inflating: build/lib/libpthreadpool.a 2025-08-14T21:35:19.7831839Z inflating: build/lib/libcpuinfo.a 2025-08-14T21:35:19.7838413Z inflating: build/lib/libcpuinfo_internals.a 2025-08-14T21:35:19.7838830Z inflating: build/lib/libclog.a 2025-08-14T21:35:19.7857115Z inflating: build/lib/libpytorch_qnnpack.a 2025-08-14T21:35:19.7858135Z inflating: build/lib/libnnpack_reference_layers.a 2025-08-14T21:35:19.8038129Z inflating: build/lib/libmicrokernels-prod.a 2025-08-14T21:35:19.8050001Z inflating: build/lib/libnnpack.a 2025-08-14T21:35:19.8853187Z inflating: build/lib/libmicrokernels-all.a 2025-08-14T21:35:19.8919551Z inflating: build/lib/libgtest.a 
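For reference, the artifacts.zip being unpacked here was the single object found under pytorch/pytorch/16976338999/linux-jammy-py3.9-gcc11-build/ in the gha-artifacts bucket. Assuming read access to that bucket, a rough manual equivalent of the download-and-extract step with the plain AWS CLI would be as sketched below; the exact object key is not printed in the log, so the recursive copy is an assumption. The extraction listing continues below.

BUCKET=gha-artifacts
PREFIX=pytorch/pytorch/16976338999/linux-jammy-py3.9-gcc11-build/

# Inspect and fetch whatever lives under this run's build prefix.
aws s3 ls "s3://${BUCKET}/${PREFIX}"
aws s3 cp "s3://${BUCKET}/${PREFIX}" . --recursive
unzip -o artifacts.zip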
2025-08-14T21:35:19.8934055Z inflating: build/lib/libgmock.a 2025-08-14T21:35:19.8935724Z inflating: build/lib/libgmock_main.a 2025-08-14T21:35:19.8936013Z inflating: build/lib/libgtest_main.a 2025-08-14T21:35:19.9018632Z inflating: build/lib/libXNNPACK.a 2025-08-14T21:35:19.9089007Z inflating: build/lib/libbenchmark.a 2025-08-14T21:35:19.9089553Z inflating: build/lib/libbenchmark_main.a 2025-08-14T21:35:19.9090392Z inflating: build/lib/libjitprofiling.a 2025-08-14T21:35:19.9148411Z inflating: build/lib/libasmjit.a 2025-08-14T21:35:19.9154760Z inflating: build/lib/libittnotify.a 2025-08-14T21:35:20.0179190Z inflating: build/lib/libfbgemm.a 2025-08-14T21:35:20.0204228Z inflating: build/lib/libtensorpipe_uv.a 2025-08-14T21:35:20.0698272Z inflating: build/lib/libtensorpipe.a 2025-08-14T21:35:20.0809639Z inflating: build/lib/libgloo.a 2025-08-14T21:35:20.0851750Z inflating: build/lib/libonnx_proto.a 2025-08-14T21:35:20.1483405Z inflating: build/lib/libonnx.a 2025-08-14T21:35:21.0901150Z inflating: build/lib/libdnnl.a 2025-08-14T21:35:21.0919071Z inflating: build/lib/libfmt.a 2025-08-14T21:35:21.1159347Z inflating: build/lib/libkineto.a 2025-08-14T21:35:21.1268345Z inflating: build/lib/libc10.so 2025-08-14T21:35:21.1270720Z inflating: build/lib/libtorch_global_deps.so 2025-08-14T21:35:23.9307477Z inflating: build/lib/libtorch_cpu.so 2025-08-14T21:35:23.9307835Z inflating: build/lib/libtorch.so 2025-08-14T21:35:23.9374200Z inflating: build/lib/libtorchbind_test.so 2025-08-14T21:35:23.9392927Z inflating: build/lib/libjitbackend_test.so 2025-08-14T21:35:23.9414800Z inflating: build/lib/libbackend_with_compiler.so 2025-08-14T21:35:23.9439803Z inflating: build/lib/libaoti_custom_ops.so 2025-08-14T21:35:23.9440243Z inflating: build/lib/libshm.so 2025-08-14T21:35:24.1227084Z inflating: build/lib/libtorch_python.so 2025-08-14T21:35:24.1255622Z inflating: build/lib/libnnapi_backend.so 2025-08-14T21:35:24.1257753Z creating: build/bin/ 2025-08-14T21:35:24.1258103Z creating: build/bin/CMakeFiles/ 2025-08-14T21:35:24.1258457Z inflating: build/bin/cmake_install.cmake 2025-08-14T21:35:24.1258714Z inflating: build/bin/CTestTestfile.cmake 2025-08-14T21:35:24.1668823Z inflating: build/bin/protoc-3.13.0.0 2025-08-14T21:35:24.2081558Z inflating: build/bin/protoc 2025-08-14T21:35:24.2130915Z inflating: build/bin/c10_AllocatorConfig_test 2025-08-14T21:35:24.2184191Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2025-08-14T21:35:24.2235378Z inflating: build/bin/c10_DeviceGuard_test 2025-08-14T21:35:24.2285887Z inflating: build/bin/c10_Device_test 2025-08-14T21:35:24.2335580Z inflating: build/bin/c10_StreamGuard_test 2025-08-14T21:35:24.2394586Z inflating: build/bin/c10_DispatchKeySet_test 2025-08-14T21:35:24.2447279Z inflating: build/bin/c10_SymInt_test 2025-08-14T21:35:24.2499398Z inflating: build/bin/c10_Scalar_test 2025-08-14T21:35:24.2557136Z inflating: build/bin/c10_InlineDeviceGuard_test 2025-08-14T21:35:24.2614556Z inflating: build/bin/c10_InlineStreamGuard_test 2025-08-14T21:35:24.2670863Z inflating: build/bin/c10_SizesAndStrides_test 2025-08-14T21:35:24.2725541Z inflating: build/bin/c10_Bitset_test 2025-08-14T21:35:24.2795139Z inflating: build/bin/c10_cow_test 2025-08-14T21:35:24.2849196Z inflating: build/bin/c10_ArrayRef_test 2025-08-14T21:35:24.2901185Z inflating: build/bin/c10_ConstexprCrc_test 2025-08-14T21:35:24.2958250Z inflating: build/bin/c10_DeadlockDetection_test 2025-08-14T21:35:24.3018996Z inflating: build/bin/c10_Enumerate_test 2025-08-14T21:35:24.3076027Z inflating: 
build/bin/c10_Half_test 2025-08-14T21:35:24.3131770Z inflating: build/bin/c10_IntrusiveList_test 2025-08-14T21:35:24.3191033Z inflating: build/bin/c10_LeftRight_test 2025-08-14T21:35:24.3253399Z inflating: build/bin/c10_Metaprogramming_test 2025-08-14T21:35:24.3312097Z inflating: build/bin/c10_NetworkFlow_test 2025-08-14T21:35:24.3368291Z inflating: build/bin/c10_Synchronized_test 2025-08-14T21:35:24.3418903Z inflating: build/bin/c10_Semaphore_test 2025-08-14T21:35:24.3473113Z inflating: build/bin/c10_TypeIndex_test 2025-08-14T21:35:24.3532342Z inflating: build/bin/c10_ThreadLocal_test 2025-08-14T21:35:24.3587446Z inflating: build/bin/c10_TypeList_test 2025-08-14T21:35:24.3639194Z inflating: build/bin/c10_TypeTraits_test 2025-08-14T21:35:24.3697664Z inflating: build/bin/c10_accumulate_test 2025-08-14T21:35:24.3759462Z inflating: build/bin/c10_bfloat16_test 2025-08-14T21:35:24.3818249Z inflating: build/bin/c10_complex_test 2025-08-14T21:35:24.3879887Z inflating: build/bin/c10_complex_math_test 2025-08-14T21:35:24.3934369Z inflating: build/bin/c10_bit_cast_test 2025-08-14T21:35:24.3988588Z inflating: build/bin/c10_error_test 2025-08-14T21:35:24.4043974Z inflating: build/bin/c10_exception_test 2025-08-14T21:35:24.4100750Z inflating: build/bin/c10_flags_test 2025-08-14T21:35:24.4156968Z inflating: build/bin/c10_irange_test 2025-08-14T21:35:24.4211009Z inflating: build/bin/c10_generic_math_test 2025-08-14T21:35:24.4371559Z inflating: build/bin/c10_intrusive_ptr_test 2025-08-14T21:35:24.4427116Z inflating: build/bin/c10_lazy_test 2025-08-14T21:35:24.4487811Z inflating: build/bin/c10_logging_test 2025-08-14T21:35:24.4551233Z inflating: build/bin/c10_ordered_preserving_dict_test 2025-08-14T21:35:24.4627184Z inflating: build/bin/c10_optional_test 2025-08-14T21:35:24.4684938Z inflating: build/bin/c10_registry_test 2025-08-14T21:35:24.4840112Z inflating: build/bin/c10_small_vector_test 2025-08-14T21:35:24.4898765Z inflating: build/bin/c10_string_util_test 2025-08-14T21:35:24.4953556Z inflating: build/bin/c10_ssize_test 2025-08-14T21:35:24.5006256Z inflating: build/bin/c10_string_view_test 2025-08-14T21:35:24.5061723Z inflating: build/bin/c10_tempfile_test 2025-08-14T21:35:24.5125179Z inflating: build/bin/c10_typeid_test 2025-08-14T21:35:24.5170854Z inflating: build/bin/c10_intrusive_ptr_benchmark 2025-08-14T21:35:24.5725959Z inflating: build/bin/vec_test_all_types_DEFAULT 2025-08-14T21:35:24.6318773Z inflating: build/bin/vec_test_all_types_AVX512 2025-08-14T21:35:24.6904646Z inflating: build/bin/vec_test_all_types_AVX2 2025-08-14T21:35:24.6959612Z inflating: build/bin/static_runtime_bench 2025-08-14T21:35:24.7201428Z inflating: build/bin/static_runtime_test 2025-08-14T21:35:24.7277584Z inflating: build/bin/Dict_test 2025-08-14T21:35:24.7331871Z inflating: build/bin/Dimname_test 2025-08-14T21:35:24.7395918Z inflating: build/bin/MaybeOwned_test 2025-08-14T21:35:24.7455611Z inflating: build/bin/NamedTensor_test 2025-08-14T21:35:24.7516411Z inflating: build/bin/apply_utils_test 2025-08-14T21:35:24.7576650Z inflating: build/bin/atest 2025-08-14T21:35:24.7643826Z inflating: build/bin/basic 2025-08-14T21:35:24.7700490Z inflating: build/bin/broadcast_test 2025-08-14T21:35:24.7752264Z inflating: build/bin/cpu_allocator_test 2025-08-14T21:35:24.7809130Z inflating: build/bin/cpu_generator_test 2025-08-14T21:35:24.7869662Z inflating: build/bin/cpu_profiling_allocator_test 2025-08-14T21:35:24.7964235Z inflating: build/bin/cpu_rng_test 2025-08-14T21:35:24.8017726Z inflating: build/bin/dlconvertor_test 
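Alongside the C++ test binaries under build/bin/ and the libraries under build/lib/, the archive carries the Python wheels under dist/ listed at the start of the extraction. To reuse the Python side of this build locally, a sketch (assuming a Python 3.9 environment matching the cp39 wheel tags; this is not something the job itself does in this form) would be:

python3 -m pip install dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl
python3 -m pip install \
  dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl \
  dist/audio/torchaudio-2.8.0a0+bdb88e1-cp39-cp39-linux_x86_64.whl \
  dist/ao/torchao-0.7.0+git51c87b6e-py3-none-any.whl
python3 -c "import torch; print(torch.__version__)"

The extraction listing continues below.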
2025-08-14T21:35:24.8080701Z inflating: build/bin/extension_backend_test 2025-08-14T21:35:24.8139727Z inflating: build/bin/half_test 2025-08-14T21:35:24.8238363Z inflating: build/bin/ivalue_test 2025-08-14T21:35:24.8288169Z inflating: build/bin/lazy_tensor_test 2025-08-14T21:35:24.8347321Z inflating: build/bin/math_kernel_test 2025-08-14T21:35:24.8400244Z inflating: build/bin/memory_format_test 2025-08-14T21:35:24.8453722Z inflating: build/bin/memory_overlapping_test 2025-08-14T21:35:24.8509408Z inflating: build/bin/mobile_memory_cleanup 2025-08-14T21:35:24.8568256Z inflating: build/bin/native_test 2025-08-14T21:35:24.8622606Z inflating: build/bin/operator_name_test 2025-08-14T21:35:24.8672417Z inflating: build/bin/operators_test 2025-08-14T21:35:24.8728151Z inflating: build/bin/packedtensoraccessor_test 2025-08-14T21:35:24.8800408Z inflating: build/bin/pow_test 2025-08-14T21:35:24.8860754Z inflating: build/bin/quantized_test 2025-08-14T21:35:24.8912600Z inflating: build/bin/reduce_ops_test 2025-08-14T21:35:24.8965460Z inflating: build/bin/reportMemoryUsage_test 2025-08-14T21:35:24.9020747Z inflating: build/bin/scalar_tensor_test 2025-08-14T21:35:24.9083376Z inflating: build/bin/scalar_test 2025-08-14T21:35:24.9134347Z inflating: build/bin/StorageUtils_test 2025-08-14T21:35:24.9187828Z inflating: build/bin/stride_properties_test 2025-08-14T21:35:24.9266087Z inflating: build/bin/tensor_iterator_test 2025-08-14T21:35:24.9323520Z inflating: build/bin/test_parallel 2025-08-14T21:35:24.9377539Z inflating: build/bin/thread_init_test 2025-08-14T21:35:24.9435879Z inflating: build/bin/type_ptr_test 2025-08-14T21:35:24.9500227Z inflating: build/bin/type_test 2025-08-14T21:35:24.9556097Z inflating: build/bin/undefined_tensor_test 2025-08-14T21:35:24.9608605Z inflating: build/bin/verify_api_visibility 2025-08-14T21:35:24.9688819Z inflating: build/bin/legacy_vmap_test 2025-08-14T21:35:24.9738434Z inflating: build/bin/weakref_test 2025-08-14T21:35:24.9798161Z inflating: build/bin/wrapdim_test 2025-08-14T21:35:24.9853546Z inflating: build/bin/xla_tensor_test 2025-08-14T21:35:24.9925099Z inflating: build/bin/IListRef_test 2025-08-14T21:35:25.0034301Z inflating: build/bin/List_test 2025-08-14T21:35:25.0112171Z inflating: build/bin/KernelFunction_test 2025-08-14T21:35:25.0241306Z inflating: build/bin/kernel_function_legacy_test 2025-08-14T21:35:25.0340378Z inflating: build/bin/kernel_function_test 2025-08-14T21:35:25.0475219Z inflating: build/bin/kernel_lambda_legacy_test 2025-08-14T21:35:25.0585613Z inflating: build/bin/kernel_lambda_test 2025-08-14T21:35:25.0651285Z inflating: build/bin/kernel_stackbased_test 2025-08-14T21:35:25.0754449Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2025-08-14T21:35:25.0811497Z inflating: build/bin/CppSignature_test 2025-08-14T21:35:25.0875439Z inflating: build/bin/backend_fallback_test 2025-08-14T21:35:25.0929327Z inflating: build/bin/op_allowlist_test 2025-08-14T21:35:25.1242314Z inflating: build/bin/op_registration_test 2025-08-14T21:35:25.1311414Z inflating: build/bin/inline_container_test 2025-08-14T21:35:25.2375918Z inflating: build/bin/test_jit 2025-08-14T21:35:25.2677153Z inflating: build/bin/test_nativert 2025-08-14T21:35:25.2732105Z inflating: build/bin/BackoffTest 2025-08-14T21:35:25.2785745Z inflating: build/bin/FileStoreTest 2025-08-14T21:35:25.2845062Z inflating: build/bin/TCPStoreTest 2025-08-14T21:35:25.2900709Z inflating: build/bin/HashStoreTest 2025-08-14T21:35:25.2970842Z inflating: build/bin/ProcessGroupGlooTest 2025-08-14T21:35:25.2975153Z 
inflating: build/bin/example_allreduce 2025-08-14T21:35:25.3031305Z inflating: build/bin/test_dist_autograd 2025-08-14T21:35:25.3100929Z inflating: build/bin/test_cpp_rpc 2025-08-14T21:35:25.4169454Z inflating: build/bin/test_api 2025-08-14T21:35:25.4170549Z inflating: build/bin/parallel_benchmark 2025-08-14T21:35:25.4499472Z inflating: build/bin/test_lazy 2025-08-14T21:35:25.4500249Z inflating: build/bin/torch_shm_manager 2025-08-14T21:35:25.4501025Z creating: .additional_ci_files/ 2025-08-14T21:35:25.4577917Z inflating: .additional_ci_files/test-times.json 2025-08-14T21:35:25.4860301Z inflating: .additional_ci_files/test-class-times.json 2025-08-14T21:35:25.4898500Z ##[group]Run rm artifacts.zip 2025-08-14T21:35:25.4898761Z rm artifacts.zip 2025-08-14T21:35:25.4903696Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:25.4903945Z env: 2025-08-14T21:35:25.4904117Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:25.4904291Z ##[endgroup] 2025-08-14T21:35:25.5230953Z ##[group]Run df -H 2025-08-14T21:35:25.5231178Z df -H 2025-08-14T21:35:25.5235766Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:25.5236015Z env: 2025-08-14T21:35:25.5236192Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:25.5236370Z ##[endgroup] 2025-08-14T21:35:25.5281535Z Filesystem Size Used Avail Use% Mounted on 2025-08-14T21:35:25.5283447Z devtmpfs 4.2M 0 4.2M 0% /dev 2025-08-14T21:35:25.5290221Z tmpfs 67G 0 67G 0% /dev/shm 2025-08-14T21:35:25.5295212Z tmpfs 27G 791k 27G 1% /run 2025-08-14T21:35:25.5295635Z /dev/nvme0n1p1 215G 69G 147G 32% / 2025-08-14T21:35:25.5295967Z tmpfs 67G 13k 67G 1% /tmp 2025-08-14T21:35:25.5296239Z /dev/nvme0n1p128 11M 1.4M 9.2M 13% /boot/efi 2025-08-14T21:35:25.5320204Z Prepare all required actions 2025-08-14T21:35:25.5320970Z Getting action download info 2025-08-14T21:35:25.6515407Z ##[group]Run ./.github/actions/download-td-artifacts 2025-08-14T21:35:25.6515726Z with: 2025-08-14T21:35:25.6515912Z env: 2025-08-14T21:35:25.6516107Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:25.6516325Z ##[endgroup] 2025-08-14T21:35:25.7938068Z ##[group]Run seemethere/download-artifact-s3@v4 2025-08-14T21:35:25.7938316Z with: 2025-08-14T21:35:25.7938466Z name: td_results 2025-08-14T21:35:25.7938640Z s3-bucket: gha-artifacts 2025-08-14T21:35:25.7938822Z region: us-east-1 2025-08-14T21:35:25.7938974Z env: 2025-08-14T21:35:25.7939123Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:25.7939295Z ##[endgroup] 2025-08-14T21:35:26.1478616Z (node:47871) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-08-14T21:35:26.1483789Z 2025-08-14T21:35:26.1488731Z Please migrate your code to use AWS SDK for JavaScript (v3). 
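Note: the seemethere/download-artifact-s3 step above pulls the td_results artifact for this run from the gha-artifacts bucket (the action itself uses the AWS SDK for JavaScript, hence the deprecation notice). As a rough illustration only, not what the action actually runs, an equivalent fetch with the AWS CLI could look like the sketch below; the key layout <owner>/<repo>/<run id>/td_results/ is inferred from the prefix the action reports.

    # hypothetical AWS CLI equivalent of the td_results artifact download above;
    # GITHUB_RUN_ID is provided by the runner (16976338999 for this job)
    aws s3 cp "s3://gha-artifacts/pytorch/pytorch/${GITHUB_RUN_ID}/td_results/" \
      td_results/ --recursive --region us-east-1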
2025-08-14T21:35:26.1493529Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-08-14T21:35:26.1495938Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-08-14T21:35:26.2283214Z Found 0 objects with prefix pytorch/pytorch/16976338999/td_results/ 2025-08-14T21:35:26.2283825Z Artifact download has finished successfully 2025-08-14T21:35:26.2478642Z ##[group]Run mkdir -p .additional_ci_files 2025-08-14T21:35:26.2478909Z mkdir -p .additional_ci_files 2025-08-14T21:35:26.2479190Z mv td_results.json .additional_ci_files/td_results.json || true 2025-08-14T21:35:26.2484041Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:26.2484290Z env: 2025-08-14T21:35:26.2484449Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:26.2484627Z ##[endgroup] 2025-08-14T21:35:26.2538456Z mv: cannot stat 'td_results.json': No such file or directory 2025-08-14T21:35:26.2561206Z ##[group]Run .github/scripts/parse_ref.py 2025-08-14T21:35:26.2561487Z .github/scripts/parse_ref.py 2025-08-14T21:35:26.2566252Z shell: /usr/bin/bash -e {0} 2025-08-14T21:35:26.2566448Z env: 2025-08-14T21:35:26.2566609Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:26.2566931Z ##[endgroup] 2025-08-14T21:35:26.2762065Z Setting output branch=main 2025-08-14T21:35:26.2850788Z Prepare all required actions 2025-08-14T21:35:26.2851190Z Getting action download info 2025-08-14T21:35:26.4001804Z ##[group]Run ./.github/actions/filter-test-configs 2025-08-14T21:35:26.4002102Z with: 2025-08-14T21:35:26.4002596Z github-token: *** 2025-08-14T21:35:26.4007088Z test-matrix: {"include": [{"config": "cpu_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 
2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]} 2025-08-14T21:35:26.4012011Z job-name: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:26.4012439Z env: 2025-08-14T21:35:26.4012628Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:26.4012846Z ##[endgroup] 2025-08-14T21:35:26.4040150Z ##[group]Run nick-fields/retry@v3.0.0 2025-08-14T21:35:26.4040386Z with: 2025-08-14T21:35:26.4040561Z shell: bash 2025-08-14T21:35:26.4040734Z timeout_minutes: 10 2025-08-14T21:35:26.4040904Z max_attempts: 5 2025-08-14T21:35:26.4041082Z retry_wait_seconds: 30 2025-08-14T21:35:26.4041588Z command: set -eux # PyYAML 6.0 doesn't work with MacOS x86 anymore # This must run on Python-3.7 (AmazonLinux2) so can't use request=3.32.2 python3 -m pip install requests==2.27.1 pyyaml==6.0.2 2025-08-14T21:35:26.4042436Z polling_interval_seconds: 1 2025-08-14T21:35:26.4042641Z warning_on_retry: true 2025-08-14T21:35:26.4042835Z continue_on_error: false 2025-08-14T21:35:26.4043034Z env: 2025-08-14T21:35:26.4043191Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:26.4043573Z GITHUB_TOKEN: *** 2025-08-14T21:35:26.4043754Z ##[endgroup] 2025-08-14T21:35:26.4940067Z + python3 -m pip install requests==2.27.1 pyyaml==6.0.2 2025-08-14T21:35:26.6892214Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T21:35:26.7808947Z Collecting requests==2.27.1 2025-08-14T21:35:26.7976226Z Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB) 2025-08-14T21:35:26.9245186Z Collecting pyyaml==6.0.2 2025-08-14T21:35:26.9292041Z Downloading PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (737 kB) 2025-08-14T21:35:26.9583115Z Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (2.10) 2025-08-14T21:35:27.2250035Z Collecting charset-normalizer~=2.0.0 2025-08-14T21:35:27.2287969Z Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB) 2025-08-14T21:35:27.2699108Z Collecting certifi>=2017.4.17 2025-08-14T21:35:27.2740781Z Downloading certifi-2025.8.3-py3-none-any.whl (161 kB) 2025-08-14T21:35:27.2824561Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (1.25.10) 2025-08-14T21:35:27.3458967Z Installing collected packages: charset-normalizer, certifi, requests, pyyaml 2025-08-14T21:35:27.4631118Z Successfully installed certifi-2025.8.3 charset-normalizer-2.0.12 pyyaml-6.0.2 requests-2.27.1 2025-08-14T21:35:28.4698113Z Command completed after 1 attempt(s). 
2025-08-14T21:35:28.4766189Z ##[group]Run set -x 2025-08-14T21:35:28.4766392Z set -x 2025-08-14T21:35:28.4766539Z  2025-08-14T21:35:28.4766789Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-08-14T21:35:28.4767082Z # in runner workspace 2025-08-14T21:35:28.4767329Z python3 "${GITHUB_ACTION_PATH}/../../scripts/parse_ref.py" 2025-08-14T21:35:28.4772422Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:28.4772667Z env: 2025-08-14T21:35:28.4772824Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:28.4772998Z ##[endgroup] 2025-08-14T21:35:28.4830431Z + python3 /home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/filter-test-configs/../../scripts/parse_ref.py 2025-08-14T21:35:28.4951185Z Setting output branch=main 2025-08-14T21:35:28.5017137Z ##[group]Run echo "Workflow: ${GITHUB_WORKFLOW}" 2025-08-14T21:35:28.5017443Z echo "Workflow: ${GITHUB_WORKFLOW}" 2025-08-14T21:35:28.5017667Z echo "Job name: ${JOB_NAME}" 2025-08-14T21:35:28.5017867Z  2025-08-14T21:35:28.5018111Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-08-14T21:35:28.5018410Z # in runner workspace 2025-08-14T21:35:28.5018691Z python3 "${GITHUB_ACTION_PATH}/../../scripts/filter_test_configs.py" \ 2025-08-14T21:35:28.5018993Z  --workflow "${GITHUB_WORKFLOW}" \ 2025-08-14T21:35:28.5019215Z  --job-name "${JOB_NAME}" \ 2025-08-14T21:35:28.5023649Z  --test-matrix "{"include": [{"config": "cpu_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": 
"linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]}" \ 2025-08-14T21:35:28.5028115Z  --selected-test-configs "" \ 2025-08-14T21:35:28.5028364Z  --pr-number "${PR_NUMBER}" \ 2025-08-14T21:35:28.5028601Z  --tag "${TAG}" \ 2025-08-14T21:35:28.5028825Z  --event-name "${EVENT_NAME}" \ 2025-08-14T21:35:28.5029058Z  --schedule "${SCHEDULE}" \ 2025-08-14T21:35:28.5029290Z  --branch "${HEAD_BRANCH}" 2025-08-14T21:35:28.5034274Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:28.5034538Z env: 2025-08-14T21:35:28.5034698Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:28.5035214Z GITHUB_TOKEN: *** 2025-08-14T21:35:28.5035586Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:28.5035958Z PR_NUMBER: 2025-08-14T21:35:28.5036124Z TAG: 2025-08-14T21:35:28.5036283Z EVENT_NAME: schedule 2025-08-14T21:35:28.5036470Z SCHEDULE: 45 0,4,8,12,16,20 * * 1-5 2025-08-14T21:35:28.5036682Z HEAD_BRANCH: main 2025-08-14T21:35:28.5036856Z ##[endgroup] 2025-08-14T21:35:28.5063270Z Workflow: inductor-periodic 2025-08-14T21:35:28.5068234Z Job name: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:28.6602926Z Setting output keep-going=True 2025-08-14T21:35:28.6604677Z Setting output ci-verbose-test-logs=False 2025-08-14T21:35:28.6605121Z Setting output ci-test-showlocals=False 2025-08-14T21:35:28.6610142Z Setting output ci-no-test-timeout=False 2025-08-14T21:35:28.6612918Z Setting output ci-no-td=False 2025-08-14T21:35:28.6613406Z Setting output ci-td-distributed=False 2025-08-14T21:35:28.6613716Z Setting output is-unstable=False 2025-08-14T21:35:28.6614027Z Setting output reenabled-issues= 2025-08-14T21:35:28.6618795Z Setting output test-matrix={"include": [{"config": "cpu_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": 
"linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]} 2025-08-14T21:35:28.6623739Z Setting output is-test-matrix-empty=False 2025-08-14T21:35:28.6744842Z ##[group]Run echo "Filtered matrix:" 2025-08-14T21:35:28.6745093Z echo "Filtered matrix:" 2025-08-14T21:35:28.6749043Z echo "{"include": [{"config": "cpu_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_aot_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]}" 2025-08-14T21:35:28.6753175Z  2025-08-14T21:35:28.6753328Z echo 2025-08-14T21:35:28.6753534Z echo "Is the 
current job unstable? False" 2025-08-14T21:35:28.6753769Z  2025-08-14T21:35:28.6753917Z echo 2025-08-14T21:35:28.6754209Z echo "Is keep-going label set? True" 2025-08-14T21:35:28.6754527Z  2025-08-14T21:35:28.6754675Z echo 2025-08-14T21:35:28.6754853Z echo "Reenabled issues? " 2025-08-14T21:35:28.6759606Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:28.6759883Z env: 2025-08-14T21:35:28.6760039Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:28.6760228Z ##[endgroup] 2025-08-14T21:35:28.6784811Z Filtered matrix: 2025-08-14T21:35:28.6789363Z {include: [{config: cpu_inductor_freezing_huggingface, shard: 1, num_shards: 1, runner: linux.8xlarge.amx}, {config: cpu_inductor_freezing_timm, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_freezing_timm, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_freezing_torchbench, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_freezing_torchbench, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_amp_freezing_huggingface, shard: 1, num_shards: 1, runner: linux.8xlarge.amx}, {config: cpu_inductor_amp_freezing_timm, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_amp_freezing_timm, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_amp_freezing_torchbench, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_inductor_amp_freezing_torchbench, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_aot_inductor_freezing_huggingface, shard: 1, num_shards: 1, runner: linux.8xlarge.amx}, {config: cpu_aot_inductor_freezing_timm, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_aot_inductor_freezing_timm, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_aot_inductor_freezing_torchbench, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_aot_inductor_freezing_torchbench, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_aot_inductor_amp_freezing_torchbench, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: cpu_aot_inductor_amp_freezing_torchbench, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_aot_inductor_freezing_torchbench, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_aot_inductor_freezing_torchbench, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_aot_inductor_amp_freezing_torchbench, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_aot_inductor_amp_freezing_torchbench, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}]} 2025-08-14T21:35:28.6793399Z 2025-08-14T21:35:28.6793484Z Is the current job unstable? False 2025-08-14T21:35:28.6793631Z 2025-08-14T21:35:28.6793712Z Is keep-going label set? True 2025-08-14T21:35:28.6793836Z 2025-08-14T21:35:28.6793912Z Reenabled issues? 
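Note: each "Setting output <name>=<value>" line above corresponds to an output that filter_test_configs.py publishes for later steps, presumably by appending a key=value pair to the file named by GITHUB_OUTPUT (the standard GitHub Actions mechanism). A minimal sketch of that generic pattern, not of the script itself:

    # a step publishes outputs by appending key=value lines to $GITHUB_OUTPUT ...
    echo "keep-going=True" >> "${GITHUB_OUTPUT}"
    echo "is-test-matrix-empty=False" >> "${GITHUB_OUTPUT}"
    # ... which later steps read back as ${{ steps.<step id>.outputs.keep-going }}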
2025-08-14T21:35:28.6870556Z ##[group]Run echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-08-14T21:35:28.6870885Z echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-08-14T21:35:28.6875347Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:28.6875586Z env: 2025-08-14T21:35:28.6875742Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:28.6875906Z JOB_TIMEOUT: 240 2025-08-14T21:35:28.6876063Z ##[endgroup] 2025-08-14T21:35:28.8651948Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:35:28.8652313Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:35:28.8652597Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-08-14T21:35:28.8657643Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T21:35:28.8657903Z env: 2025-08-14T21:35:28.8658075Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:28.8658264Z ##[endgroup] 2025-08-14T21:35:28.8766713Z ##[group]Run set -x 2025-08-14T21:35:28.8766981Z set -x 2025-08-14T21:35:28.8767142Z  2025-08-14T21:35:28.8767321Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2025-08-14T21:35:28.8767686Z  TEST_COMMAND=.ci/pytorch/multigpu-test.sh 2025-08-14T21:35:28.8767942Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2025-08-14T21:35:28.8768172Z  TEST_COMMAND=.ci/onnx/test.sh 2025-08-14T21:35:28.8768367Z else 2025-08-14T21:35:28.8768533Z  TEST_COMMAND=.ci/pytorch/test.sh 2025-08-14T21:35:28.8768732Z fi 2025-08-14T21:35:28.8768876Z  2025-08-14T21:35:28.8769046Z # Leaving 1GB for the runner and other things 2025-08-14T21:35:28.8769396Z TOTAL_AVAILABLE_MEMORY_IN_GB=$(awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo) 2025-08-14T21:35:28.8769928Z # https://docs.docker.com/engine/containers/resource_constraints/#--memory-swap-details, the 3GB swap 2025-08-14T21:35:28.8770336Z # comes from https://github.com/pytorch/test-infra/pull/6058 2025-08-14T21:35:28.8770659Z TOTAL_MEMORY_WITH_SWAP=$(("${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}" + 3)) 2025-08-14T21:35:28.8770915Z  2025-08-14T21:35:28.8771088Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-08-14T21:35:28.8771310Z  SHM_OPTS= 2025-08-14T21:35:28.8771480Z  JENKINS_USER= 2025-08-14T21:35:28.8771706Z  # ensure that docker container cleanly exits in 12 hours 2025-08-14T21:35:28.8771987Z  # if for some reason cleanup action doesn't stop container 2025-08-14T21:35:28.8772232Z  # when job is cancelled 2025-08-14T21:35:28.8772432Z  DOCKER_SHELL_CMD="sleep 12h" 2025-08-14T21:35:28.8772617Z else 2025-08-14T21:35:28.8772793Z  SHM_OPTS="--shm-size=${SHM_SIZE}" 2025-08-14T21:35:28.8773008Z  JENKINS_USER="--user jenkins" 2025-08-14T21:35:28.8773209Z  DOCKER_SHELL_CMD= 2025-08-14T21:35:28.8773377Z fi 2025-08-14T21:35:28.8773520Z  2025-08-14T21:35:28.8773734Z # detached container should get cleaned up by teardown_ec2_linux 2025-08-14T21:35:28.8774048Z # TODO: Stop building test binaries as part of the build phase 2025-08-14T21:35:28.8774418Z # Used for GPU_FLAG, SHM_OPTS, JENKINS_USER and DOCKER_SHELL_CMD since that doesn't play nice 2025-08-14T21:35:28.8774742Z # shellcheck disable=SC2086,SC2090 2025-08-14T21:35:28.8774962Z container_name=$(docker run \ 2025-08-14T21:35:28.8775163Z  ${GPU_FLAG:-} \ 2025-08-14T21:35:28.8775377Z  ${SCCACHE_SERVER_PORT_DOCKER_FLAG:-} \ 2025-08-14T21:35:28.8775602Z  -e BUILD_ENVIRONMENT \ 2025-08-14T21:35:28.8775791Z  -e PR_NUMBER \ 2025-08-14T21:35:28.8775976Z  -e GITHUB_ACTIONS \ 2025-08-14T21:35:28.8776171Z  -e GITHUB_REPOSITORY \ 2025-08-14T21:35:28.8776361Z  -e 
GITHUB_WORKFLOW \ 2025-08-14T21:35:28.8776554Z  -e GITHUB_JOB \ 2025-08-14T21:35:28.8776737Z  -e GITHUB_RUN_ID \ 2025-08-14T21:35:28.8776920Z  -e GITHUB_RUN_NUMBER \ 2025-08-14T21:35:28.8777119Z  -e GITHUB_RUN_ATTEMPT \ 2025-08-14T21:35:28.8777319Z  -e JOB_ID \ 2025-08-14T21:35:28.8777494Z  -e JOB_NAME \ 2025-08-14T21:35:28.8777665Z  -e BASE_SHA \ 2025-08-14T21:35:28.8777834Z  -e BRANCH \ 2025-08-14T21:35:28.8778000Z  -e SHA1 \ 2025-08-14T21:35:28.8778163Z  -e AWS_DEFAULT_REGION \ 2025-08-14T21:35:28.8778358Z  -e IN_WHEEL_TEST \ 2025-08-14T21:35:28.8778540Z  -e SHARD_NUMBER \ 2025-08-14T21:35:28.8778718Z  -e TEST_CONFIG \ 2025-08-14T21:35:28.8778904Z  -e NUM_TEST_SHARDS \ 2025-08-14T21:35:28.8779095Z  -e REENABLED_ISSUES \ 2025-08-14T21:35:28.8779293Z  -e CONTINUE_THROUGH_ERROR \ 2025-08-14T21:35:28.8779864Z  -e VERBOSE_TEST_LOGS \ 2025-08-14T21:35:28.8780097Z  -e TEST_SHOWLOCALS \ 2025-08-14T21:35:28.8780306Z  -e NO_TEST_TIMEOUT \ 2025-08-14T21:35:28.8780500Z  -e NO_TD \ 2025-08-14T21:35:28.8780686Z  -e TD_DISTRIBUTED \ 2025-08-14T21:35:28.8782099Z  -e PR_LABELS \ 2025-08-14T21:35:28.8782296Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2025-08-14T21:35:28.8782520Z  -e SCCACHE_BUCKET \ 2025-08-14T21:35:28.8782711Z  -e SCCACHE_REGION \ 2025-08-14T21:35:28.8782888Z  -e XLA_CUDA \ 2025-08-14T21:35:28.8783089Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2025-08-14T21:35:28.8783327Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2025-08-14T21:35:28.8783564Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2025-08-14T21:35:28.8783795Z  -e SKIP_SCCACHE_INITIALIZATION=1 \ 2025-08-14T21:35:28.8784019Z  -e HUGGING_FACE_HUB_TOKEN \ 2025-08-14T21:35:28.8784236Z  -e SCRIBE_GRAPHQL_ACCESS_TOKEN \ 2025-08-14T21:35:28.8784440Z  -e DASHBOARD_TAG \ 2025-08-14T21:35:28.8784636Z  -e ARTIFACTS_FILE_SUFFIX \ 2025-08-14T21:35:28.8784874Z  --memory="${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}g" \ 2025-08-14T21:35:28.8785147Z  --memory-swap="${TOTAL_MEMORY_WITH_SWAP}g" \ 2025-08-14T21:35:28.8785407Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2025-08-14T21:35:28.8785666Z  --security-opt seccomp=unconfined \ 2025-08-14T21:35:28.8785886Z  --cap-add=SYS_PTRACE \ 2025-08-14T21:35:28.8786075Z  --ipc=host \ 2025-08-14T21:35:28.8786251Z  ${SHM_OPTS} \ 2025-08-14T21:35:28.8786423Z  --tty \ 2025-08-14T21:35:28.8786580Z  --detach \ 2025-08-14T21:35:28.8786765Z  --name="${container_name}" \ 2025-08-14T21:35:28.8786971Z  ${JENKINS_USER} \ 2025-08-14T21:35:28.8787204Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2025-08-14T21:35:28.8787458Z  -w /var/lib/jenkins/workspace \ 2025-08-14T21:35:28.8787670Z  "${DOCKER_IMAGE}" \ 2025-08-14T21:35:28.8787857Z  ${DOCKER_SHELL_CMD} 2025-08-14T21:35:28.8788030Z ) 2025-08-14T21:35:28.8788241Z # Propagate download.pytorch.org IP to container 2025-08-14T21:35:28.8788656Z grep download.pytorch.org /etc/hosts | docker exec -i "${container_name}" sudo bash -c "/bin/cat >> /etc/hosts" 2025-08-14T21:35:28.8789087Z echo "DOCKER_CONTAINER_ID=${container_name}" >> "${GITHUB_ENV}" 2025-08-14T21:35:28.8789339Z  2025-08-14T21:35:28.8789522Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-08-14T21:35:28.8789878Z  docker exec -t "${container_name}" sh -c "python3 -m pip install -r .ci/docker/requirements-ci.txt" 2025-08-14T21:35:28.8790181Z fi 2025-08-14T21:35:28.8790327Z  2025-08-14T21:35:28.8790633Z docker exec -t "${container_name}" sh -c "python3 -m pip install $(echo dist/*.whl)[opt-einsum] && ${TEST_COMMAND}" 2025-08-14T21:35:28.8795494Z shell: /usr/bin/bash -e {0} 2025-08-14T21:35:28.8795671Z env: 
2025-08-14T21:35:28.8795827Z GIT_DEFAULT_BRANCH: main 2025-08-14T21:35:28.8796053Z BUILD_ENVIRONMENT: linux-jammy-py3.9-gcc11-build 2025-08-14T21:35:28.8796282Z PR_NUMBER: 2025-08-14T21:35:28.8796455Z GITHUB_REPOSITORY: pytorch/pytorch 2025-08-14T21:35:28.8796669Z GITHUB_WORKFLOW: inductor-periodic 2025-08-14T21:35:28.8796862Z GITHUB_JOB: test 2025-08-14T21:35:28.8797030Z GITHUB_RUN_ID: 16976338999 2025-08-14T21:35:28.8797215Z GITHUB_RUN_NUMBER: 66307 2025-08-14T21:35:28.8797387Z GITHUB_RUN_ATTEMPT: 1 2025-08-14T21:35:28.8797560Z JOB_ID: 48128261046 2025-08-14T21:35:28.8797913Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:28.8798277Z BRANCH: main 2025-08-14T21:35:28.8798453Z SHA1: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:28.8798761Z BASE_SHA: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:28.8799010Z TEST_CONFIG: cpu_inductor_freezing_huggingface 2025-08-14T21:35:28.8799217Z SHARD_NUMBER: 1 2025-08-14T21:35:28.8799380Z NUM_TEST_SHARDS: 1 2025-08-14T21:35:28.8799597Z REENABLED_ISSUES: 2025-08-14T21:35:28.8799767Z CONTINUE_THROUGH_ERROR: True 2025-08-14T21:35:28.8799958Z VERBOSE_TEST_LOGS: False 2025-08-14T21:35:28.8800142Z TEST_SHOWLOCALS: False 2025-08-14T21:35:28.8800316Z NO_TEST_TIMEOUT: False 2025-08-14T21:35:28.8800488Z NO_TD: False 2025-08-14T21:35:28.8800644Z TD_DISTRIBUTED: False 2025-08-14T21:35:28.8800857Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2025-08-14T21:35:28.8801087Z SCCACHE_REGION: us-east-1 2025-08-14T21:35:28.8801263Z SHM_SIZE: 1g 2025-08-14T21:35:28.8801770Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:35:28.8802288Z XLA_CUDA: 2025-08-14T21:35:28.8802540Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2025-08-14T21:35:28.8802836Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 0 2025-08-14T21:35:28.8803053Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2025-08-14T21:35:28.8803250Z DASHBOARD_TAG: 2025-08-14T21:35:28.8803611Z HUGGING_FACE_HUB_TOKEN: *** 2025-08-14T21:35:28.8803891Z SCRIBE_GRAPHQL_ACCESS_TOKEN: *** 2025-08-14T21:35:28.8804230Z ARTIFACTS_FILE_SUFFIX: test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T21:35:28.8804559Z ##[endgroup] 2025-08-14T21:35:28.8834168Z + [[ cpu_inductor_freezing_huggingface == \m\u\l\t\i\g\p\u ]] 2025-08-14T21:35:28.8834527Z + [[ linux-jammy-py3.9-gcc11-build == *onnx* ]] 2025-08-14T21:35:28.8834788Z + TEST_COMMAND=.ci/pytorch/test.sh 2025-08-14T21:35:28.8835079Z ++ awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo 2025-08-14T21:35:28.8855374Z + TOTAL_AVAILABLE_MEMORY_IN_GB='122.780 ' 2025-08-14T21:35:28.8855689Z + TOTAL_MEMORY_WITH_SWAP=125 2025-08-14T21:35:28.8855946Z + [[ linux-jammy-py3.9-gcc11-build == *\s\3\9\0\x* ]] 2025-08-14T21:35:28.8856211Z + SHM_OPTS=--shm-size=1g 2025-08-14T21:35:28.8856415Z + JENKINS_USER='--user jenkins' 2025-08-14T21:35:28.8856607Z + DOCKER_SHELL_CMD= 2025-08-14T21:35:28.8868234Z +++ nproc --ignore=2 2025-08-14T21:35:28.9261043Z ++ docker run -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e GITHUB_REPOSITORY -e GITHUB_WORKFLOW -e GITHUB_JOB -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e JOB_ID -e JOB_NAME -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e REENABLED_ISSUES -e CONTINUE_THROUGH_ERROR 
-e VERBOSE_TEST_LOGS -e TEST_SHOWLOCALS -e NO_TEST_TIMEOUT -e NO_TD -e TD_DISTRIBUTED -e PR_LABELS -e MAX_JOBS=30 -e SCCACHE_BUCKET -e SCCACHE_REGION -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e PYTORCH_TEST_RERUN_DISABLED_TESTS -e SKIP_SCCACHE_INITIALIZATION=1 -e HUGGING_FACE_HUB_TOKEN -e SCRIBE_GRAPHQL_ACCESS_TOKEN -e DASHBOARD_TAG -e ARTIFACTS_FILE_SUFFIX --memory=122g --memory-swap=125g --env-file=/tmp/github_env_16976338999 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=1g --tty --detach --name= --user jenkins -v /home/ec2-user/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T21:35:40.4342190Z + container_name=a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T21:35:40.4347962Z + grep download.pytorch.org /etc/hosts 2025-08-14T21:35:40.4350142Z + docker exec -i a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 sudo bash -c '/bin/cat >> /etc/hosts' 2025-08-14T21:35:40.5837559Z + echo DOCKER_CONTAINER_ID=a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T21:35:40.5838707Z + [[ linux-jammy-py3.9-gcc11-build == *\s\3\9\0\x* ]] 2025-08-14T21:35:40.5846434Z ++ echo dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:40.5847455Z + docker exec -t a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 sh -c 'python3 -m pip install dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl[opt-einsum] && .ci/pytorch/test.sh' 2025-08-14T21:35:41.0058127Z Processing ./dist/torch-2.9.0a0+git1fc683c-cp39-cp39-linux_x86_64.whl (from torch==2.9.0a0+git1fc683c) 2025-08-14T21:35:41.2197720Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.18.0) 2025-08-14T21:35:41.2198653Z Requirement already satisfied: typing-extensions>=4.10.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (4.14.1) 2025-08-14T21:35:41.2205878Z Requirement already satisfied: sympy>=1.13.3 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (1.13.3) 2025-08-14T21:35:41.2206776Z Requirement already satisfied: networkx>=2.5.1 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (2.8.8) 2025-08-14T21:35:41.2215435Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.1.6) 2025-08-14T21:35:41.2220466Z Requirement already satisfied: fsspec>=0.8.5 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (2025.3.0) 2025-08-14T21:35:41.2227746Z Requirement already satisfied: opt-einsum>=3.3 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.3.0) 2025-08-14T21:35:41.2512783Z Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from opt-einsum>=3.3->torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (1.22.4) 2025-08-14T21:35:41.2532575Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in 
/opt/conda/envs/py_3.9/lib/python3.9/site-packages (from sympy>=1.13.3->torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (1.3.0) 2025-08-14T21:35:41.2579438Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from jinja2->torch==2.9.0a0+git1fc683c->torch==2.9.0a0+git1fc683c) (3.0.2) 2025-08-14T21:35:42.0124219Z Installing collected packages: torch 2025-08-14T21:35:49.3108473Z ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 2025-08-14T21:35:49.3109132Z dall-e 0.1 requires torchvision, which is not installed. 2025-08-14T21:35:49.3109459Z effdet 0.4.1 requires torchvision, which is not installed. 2025-08-14T21:35:49.3109890Z pytorch-labs-segment-anything-fast 0.2 requires torchao, which is not installed. 2025-08-14T21:35:49.3110424Z pytorch-labs-segment-anything-fast 0.2 requires torchvision>=0.17.0.dev20231026, which is not installed. 2025-08-14T21:35:49.3110945Z timm 1.0.14 requires torchvision, which is not installed. 2025-08-14T21:35:49.3111314Z Successfully installed torch-2.9.0a0+git1fc683c 2025-08-14T21:35:49.4356367Z + export TERM=vt100 2025-08-14T21:35:49.4358437Z + TERM=vt100 2025-08-14T21:35:49.4358782Z ++ dirname .ci/pytorch/test.sh 2025-08-14T21:35:49.4368209Z + source .ci/pytorch/common.sh 2025-08-14T21:35:49.4369258Z +++ dirname .ci/pytorch/common.sh 2025-08-14T21:35:49.4377913Z ++ source .ci/pytorch/common_utils.sh 2025-08-14T21:35:49.4378505Z +++ declare -f -t trap_add 2025-08-14T21:35:49.4379841Z ++ set -ex -o pipefail 2025-08-14T21:35:49.4380102Z ++ [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-08-14T21:35:49.4380369Z ++ BUILD_TEST_LIBTORCH=0 2025-08-14T21:35:49.4389237Z ++ dirname .ci/pytorch/test.sh 2025-08-14T21:35:49.4394145Z + source .ci/pytorch/common-build.sh 2025-08-14T21:35:49.4395002Z ++ [[ linux-jammy-py3.9-gcc11-build != *win-* ]] 2025-08-14T21:35:49.4407069Z ++++ dirname .ci/pytorch/common-build.sh 2025-08-14T21:35:49.4415539Z +++ cd .ci/pytorch 2025-08-14T21:35:49.4415930Z +++ pwd -P 2025-08-14T21:35:49.4416711Z ++ script_dir=/var/lib/jenkins/workspace/.ci/pytorch 2025-08-14T21:35:49.4417456Z ++ [[ linux-jammy-py3.9-gcc11-build == *-pch* ]] 2025-08-14T21:35:49.4417758Z ++ which sccache 2025-08-14T21:35:49.4437390Z ++ [[ -z ossci-compiler-cache-circleci-v2 ]] 2025-08-14T21:35:49.4437690Z ++ sccache --stop-server 2025-08-14T21:35:49.4461003Z ++ true 2025-08-14T21:35:49.4461241Z ++ rm -f /var/lib/jenkins/sccache_error.log 2025-08-14T21:35:49.4480710Z ++ trap_add sccache_epilogue EXIT 2025-08-14T21:35:49.4485646Z ++ trap_add_cmd=sccache_epilogue 2025-08-14T21:35:49.4489601Z ++ shift 2025-08-14T21:35:49.4494421Z ++ for trap_add_name in "$@" 2025-08-14T21:35:49.4498357Z ++++ trap -p EXIT 2025-08-14T21:35:49.4498589Z +++ eval 'extract_trap_cmd ' 2025-08-14T21:35:49.4499140Z ++++ extract_trap_cmd 2025-08-14T21:35:49.4499385Z ++++ printf '%s\n' '' 2025-08-14T21:35:49.4499769Z +++ printf '%s\n' sccache_epilogue 2025-08-14T21:35:49.4499986Z ++ trap -- ' 2025-08-14T21:35:49.4500166Z sccache_epilogue' EXIT 2025-08-14T21:35:49.4500369Z ++ [[ -n 1 ]] 2025-08-14T21:35:49.4500648Z ++ echo 'Skipping sccache server initialization, setting environment variables' 2025-08-14T21:35:49.4501057Z Skipping sccache server initialization, setting environment variables 2025-08-14T21:35:49.4501357Z ++ export SCCACHE_IDLE_TIMEOUT=0 2025-08-14T21:35:49.4501569Z ++ SCCACHE_IDLE_TIMEOUT=0 
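Note: the trace above shows common_utils.sh registering sccache_epilogue on EXIT via trap_add (cleanup_workspace is chained onto the same trap a few lines further down), so several cleanup hooks can share one EXIT trap instead of overwriting each other. A simplified sketch of that idea, assuming stub hook functions and not reproducing the actual trap_add implementation:

    # chain commands onto a single EXIT trap instead of replacing it each time
    sccache_epilogue()  { echo "flush sccache stats"; }          # stub for illustration
    cleanup_workspace() { echo "restore workspace ownership"; }  # stub for illustration
    _exit_cmds=""
    trap_add() {
      _exit_cmds="${_exit_cmds:+${_exit_cmds}; }$1"
      trap -- "$_exit_cmds" EXIT
    }
    trap_add sccache_epilogue
    trap_add cleanup_workspace   # on exit: sccache_epilogue; cleanup_workspace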
2025-08-14T21:35:49.4501812Z ++ export SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:35:49.4502113Z ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:35:49.4502442Z ++ export RUST_LOG=sccache::server=error 2025-08-14T21:35:49.4502664Z ++ RUST_LOG=sccache::server=error 2025-08-14T21:35:49.4502876Z ++ sccache --zero-stats 2025-08-14T21:35:49.6066524Z Statistics zeroed. 2025-08-14T21:35:49.6071317Z ++ which ccache 2025-08-14T21:35:49.6090591Z + [[ linux-jammy-py3.9-gcc11-build != *rocm* ]] 2025-08-14T21:35:49.6096409Z + [[ linux-jammy-py3.9-gcc11-build != *s390x* ]] 2025-08-14T21:35:49.6096813Z + [[ -d /var/lib/jenkins/workspace ]] 2025-08-14T21:35:49.6097108Z ++ stat -c %u /var/lib/jenkins/workspace 2025-08-14T21:35:49.6110744Z + WORKSPACE_ORIGINAL_OWNER_ID=1000 2025-08-14T21:35:49.6111170Z + trap_add cleanup_workspace EXIT 2025-08-14T21:35:49.6111695Z + trap_add_cmd=cleanup_workspace 2025-08-14T21:35:49.6111908Z + shift 2025-08-14T21:35:49.6112081Z + for trap_add_name in "$@" 2025-08-14T21:35:49.6119386Z +++ trap -p EXIT 2025-08-14T21:35:49.6122171Z ++ eval 'extract_trap_cmd trap -- '\'' 2025-08-14T21:35:49.6122485Z sccache_epilogue'\'' EXIT' 2025-08-14T21:35:49.6122724Z +++ extract_trap_cmd trap -- ' 2025-08-14T21:35:49.6122941Z sccache_epilogue' EXIT 2025-08-14T21:35:49.6123143Z +++ printf '%s\n' ' 2025-08-14T21:35:49.6123315Z sccache_epilogue' 2025-08-14T21:35:49.6123527Z ++ printf '%s\n' cleanup_workspace 2025-08-14T21:35:49.6124557Z + trap -- ' 2025-08-14T21:35:49.6124748Z sccache_epilogue 2025-08-14T21:35:49.6124935Z cleanup_workspace' EXIT 2025-08-14T21:35:49.6125172Z + sudo chown -R jenkins /var/lib/jenkins/workspace 2025-08-14T21:35:50.0505653Z + git config --global --add safe.directory /var/lib/jenkins/workspace 2025-08-14T21:35:50.0527389Z + echo 'Environment variables:' 2025-08-14T21:35:50.0529051Z Environment variables: 2025-08-14T21:35:50.0534591Z + env 2025-08-14T21:35:50.0540226Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-08-14T21:35:50.0540645Z CONTINUE_THROUGH_ERROR=True 2025-08-14T21:35:50.0540917Z BUILD_ENVIRONMENT=linux-jammy-py3.9-gcc11-build 2025-08-14T21:35:50.0541169Z HOSTNAME=a7aa204eccbc 2025-08-14T21:35:50.0541582Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0542166Z GITHUB_ACTION=__run_2 2025-08-14T21:35:50.0542702Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-08-14T21:35:50.0542931Z GITHUB_RUN_NUMBER=66307 2025-08-14T21:35:50.0543162Z TEST_CONFIG=cpu_inductor_freezing_huggingface 2025-08-14T21:35:50.0543419Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-08-14T21:35:50.0543768Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-08-14T21:35:50.0543993Z SCCACHE_IDLE_TIMEOUT=0 2025-08-14T21:35:50.0544447Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-08-14T21:35:50.0544667Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-08-14T21:35:50.0544894Z GITHUB_REF_TYPE=branch 2025-08-14T21:35:50.0545083Z TORCH_CUDA_ARCH_LIST=Maxwell 2025-08-14T21:35:50.0545313Z BASE_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0545538Z XLA_CUDA= 2025-08-14T21:35:50.0545715Z NCCL_LIB_DIR=/usr/local/cuda/lib64/ 2025-08-14T21:35:50.0546000Z HUGGING_FACE_HUB_TOKEN=*** 2025-08-14T21:35:50.0551445Z *** 2025-08-14T21:35:50.0551666Z GITHUB_REPOSITORY_ID=65600975 2025-08-14T21:35:50.0551887Z GITHUB_ACTIONS=true 2025-08-14T21:35:50.0552126Z SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:35:50.0552392Z 
SHA1=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0552644Z GITHUB_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0553016Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-periodic.yml@refs/heads/main 2025-08-14T21:35:50.0553410Z UCC_HOME=/usr 2025-08-14T21:35:50.0553580Z VERBOSE_TEST_LOGS=False 2025-08-14T21:35:50.0553769Z GITHUB_REF=refs/heads/main 2025-08-14T21:35:50.0553949Z SHARD_NUMBER=1 2025-08-14T21:35:50.0554124Z GITHUB_REF_PROTECTED=true 2025-08-14T21:35:50.0554314Z HOME=/var/lib/jenkins 2025-08-14T21:35:50.0554517Z GITHUB_API_URL=https://api.github.com 2025-08-14T21:35:50.0554754Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-08-14T21:35:50.0554959Z UCX_COMMIT= 2025-08-14T21:35:50.0555109Z USE_SYSTEM_NCCL=1 2025-08-14T21:35:50.0555278Z NUM_TEST_SHARDS=1 2025-08-14T21:35:50.0555443Z UCX_HOME=/usr 2025-08-14T21:35:50.0555817Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0556377Z JOB_NAME=linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:50.0556914Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0557398Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-08-14T21:35:50.0557709Z GITHUB_EVENT_NAME=schedule 2025-08-14T21:35:50.0557901Z DASHBOARD_TAG= 2025-08-14T21:35:50.0558072Z GITHUB_RUN_ID=16976338999 2025-08-14T21:35:50.0558257Z INSTALLED_OPENBLAS= 2025-08-14T21:35:50.0558651Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0559059Z GITHUB_ACTOR=pytorchmergebot 2025-08-14T21:35:50.0559254Z PR_NUMBER= 2025-08-14T21:35:50.0559403Z DESIRED_CUDA= 2025-08-14T21:35:50.0559586Z GITHUB_RUN_ATTEMPT=1 2025-08-14T21:35:50.0559778Z ANACONDA_PYTHON_VERSION=3.9 2025-08-14T21:35:50.0560014Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-08-14T21:35:50.0560246Z TERM=vt100 2025-08-14T21:35:50.0560406Z INSTALLED_VISION=yes 2025-08-14T21:35:50.0560569Z BRANCH=main 2025-08-14T21:35:50.0560728Z SCCACHE_REGION=us-east-1 2025-08-14T21:35:50.0560906Z OPENSSL_ROOT_DIR=/opt/openssl 2025-08-14T21:35:50.0561098Z CUDA_PATH=/usr/local/cuda 2025-08-14T21:35:50.0561478Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-08-14T21:35:50.0561831Z GITHUB_SERVER_URL=https://github.com 2025-08-14T21:35:50.0562033Z UCC_COMMIT= 2025-08-14T21:35:50.0562189Z REENABLED_ISSUES= 2025-08-14T21:35:50.0562344Z DOCS=yes 2025-08-14T21:35:50.0562495Z SHLVL=1 2025-08-14T21:35:50.0562642Z MAX_JOBS=30 2025-08-14T21:35:50.0562791Z GITHUB_ACTOR_ID=97764156 2025-08-14T21:35:50.0563029Z GITHUB_WORKFLOW_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0563281Z GITHUB_REF_NAME=main 2025-08-14T21:35:50.0563688Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-08-14T21:35:50.0563980Z GITHUB_JOB=test 2025-08-14T21:35:50.0564142Z NO_TEST_TIMEOUT=False 2025-08-14T21:35:50.0564311Z TD_DISTRIBUTED=False 2025-08-14T21:35:50.0564549Z GITHUB_REPOSITORY=pytorch/pytorch 2025-08-14T21:35:50.0564746Z GITHUB_RETENTION_DAYS=90 2025-08-14T21:35:50.0564922Z OPENSSL_DIR=/opt/openssl 2025-08-14T21:35:50.0565093Z GITHUB_ACTION_REPOSITORY= 2025-08-14T21:35:50.0565584Z 
PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:35:50.0566065Z GITHUB_BASE_REF= 2025-08-14T21:35:50.0566224Z INSTALLED_ACL= 2025-08-14T21:35:50.0566535Z ARTIFACTS_FILE_SUFFIX=test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T21:35:50.0566887Z CI=true 2025-08-14T21:35:50.0567037Z GITHUB_REPOSITORY_OWNER=pytorch 2025-08-14T21:35:50.0567288Z RUST_LOG=sccache::server=error 2025-08-14T21:35:50.0567474Z JOB_ID=48128261046 2025-08-14T21:35:50.0567630Z GITHUB_HEAD_REF= 2025-08-14T21:35:50.0567779Z GITHUB_ACTION_REF= 2025-08-14T21:35:50.0567974Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-08-14T21:35:50.0568216Z TEST_SHOWLOCALS=False 2025-08-14T21:35:50.0568401Z GITHUB_WORKFLOW=inductor-periodic 2025-08-14T21:35:50.0568617Z DEBIAN_FRONTEND=noninteractive 2025-08-14T21:35:50.0569014Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0569401Z NO_TD=False 2025-08-14T21:35:50.0569573Z SKIP_SCCACHE_INITIALIZATION=1 2025-08-14T21:35:50.0569792Z NCCL_INCLUDE_DIR=/usr/local/cuda/include/ 2025-08-14T21:35:50.0570000Z _=/usr/bin/env 2025-08-14T21:35:50.0570237Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2025-08-14T21:35:50.0850158Z + TORCH_INSTALL_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch 2025-08-14T21:35:50.0855952Z + TORCH_BIN_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/bin 2025-08-14T21:35:50.0857456Z + TORCH_LIB_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib 2025-08-14T21:35:50.0857871Z + TORCH_TEST_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/test 2025-08-14T21:35:50.0858188Z + BUILD_DIR=build 2025-08-14T21:35:50.0858385Z + BUILD_RENAMED_DIR=build_renamed 2025-08-14T21:35:50.0858596Z + BUILD_BIN_DIR=build/bin 2025-08-14T21:35:50.0858783Z + SHARD_NUMBER=1 2025-08-14T21:35:50.0858957Z + NUM_TEST_SHARDS=1 2025-08-14T21:35:50.0859147Z + export TORCH_SERIALIZATION_DEBUG=1 2025-08-14T21:35:50.0859374Z + TORCH_SERIALIZATION_DEBUG=1 2025-08-14T21:35:50.0859747Z + export VALGRIND=ON 2025-08-14T21:35:50.0860000Z + VALGRIND=ON 2025-08-14T21:35:50.0860268Z + [[ linux-jammy-py3.9-gcc11-build == *clang9* ]] 2025-08-14T21:35:50.0860560Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-08-14T21:35:50.0860813Z + [[ linux-jammy-py3.9-gcc11-build == *s390x* ]] 2025-08-14T21:35:50.0861040Z + [[ 0 == \1 ]] 2025-08-14T21:35:50.0861220Z + [[ True == \1 ]] 2025-08-14T21:35:50.0861423Z + [[ linux-jammy-py3.9-gcc11-build != *bazel* ]] 2025-08-14T21:35:50.0861669Z ++ realpath build/custom_test_artifacts 2025-08-14T21:35:50.0865681Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2025-08-14T21:35:50.0866104Z + [[ -n '' ]] 2025-08-14T21:35:50.0870485Z + echo 'Environment variables' 2025-08-14T21:35:50.0873471Z Environment variables 2025-08-14T21:35:50.0879006Z + env 2025-08-14T21:35:50.0896335Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-08-14T21:35:50.0898015Z CONTINUE_THROUGH_ERROR=True 2025-08-14T21:35:50.0898281Z BUILD_ENVIRONMENT=linux-jammy-py3.9-gcc11-build 2025-08-14T21:35:50.0898556Z HOSTNAME=a7aa204eccbc 2025-08-14T21:35:50.0898958Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0899357Z 
GITHUB_ACTION=__run_2 2025-08-14T21:35:50.0900090Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-08-14T21:35:50.0900321Z GITHUB_RUN_NUMBER=66307 2025-08-14T21:35:50.0900539Z TEST_CONFIG=cpu_inductor_freezing_huggingface 2025-08-14T21:35:50.0900784Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-08-14T21:35:50.0901016Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-08-14T21:35:50.0901349Z SCCACHE_IDLE_TIMEOUT=0 2025-08-14T21:35:50.0901790Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-08-14T21:35:50.0902016Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-08-14T21:35:50.0902230Z GITHUB_REF_TYPE=branch 2025-08-14T21:35:50.0902419Z TORCH_CUDA_ARCH_LIST=Maxwell 2025-08-14T21:35:50.0902653Z BASE_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0902875Z XLA_CUDA= 2025-08-14T21:35:50.0903052Z NCCL_LIB_DIR=/usr/local/cuda/lib64/ 2025-08-14T21:35:50.0903335Z HUGGING_FACE_HUB_TOKEN=*** 2025-08-14T21:35:50.0903592Z *** 2025-08-14T21:35:50.0903750Z GITHUB_REPOSITORY_ID=65600975 2025-08-14T21:35:50.0903949Z GITHUB_ACTIONS=true 2025-08-14T21:35:50.0904169Z SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-08-14T21:35:50.0904424Z SHA1=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0904676Z GITHUB_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0905047Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-periodic.yml@refs/heads/main 2025-08-14T21:35:50.0905384Z UCC_HOME=/usr 2025-08-14T21:35:50.0905560Z TORCH_SERIALIZATION_DEBUG=1 2025-08-14T21:35:50.0905758Z VERBOSE_TEST_LOGS=False 2025-08-14T21:35:50.0905938Z GITHUB_REF=refs/heads/main 2025-08-14T21:35:50.0906126Z SHARD_NUMBER=1 2025-08-14T21:35:50.0906301Z GITHUB_REF_PROTECTED=true 2025-08-14T21:35:50.0906483Z HOME=/var/lib/jenkins 2025-08-14T21:35:50.0906695Z GITHUB_API_URL=https://api.github.com 2025-08-14T21:35:50.0906929Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-08-14T21:35:50.0907132Z UCX_COMMIT= 2025-08-14T21:35:50.0907329Z USE_SYSTEM_NCCL=1 2025-08-14T21:35:50.0907502Z NUM_TEST_SHARDS=1 2025-08-14T21:35:50.0907670Z UCX_HOME=/usr 2025-08-14T21:35:50.0908034Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0908619Z JOB_NAME=linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T21:35:50.0909196Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0909707Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-08-14T21:35:50.0910034Z GITHUB_EVENT_NAME=schedule 2025-08-14T21:35:50.0910220Z DASHBOARD_TAG= 2025-08-14T21:35:50.0910388Z GITHUB_RUN_ID=16976338999 2025-08-14T21:35:50.0910567Z INSTALLED_OPENBLAS= 2025-08-14T21:35:50.0910957Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0911411Z GITHUB_ACTOR=pytorchmergebot 2025-08-14T21:35:50.0911604Z PR_NUMBER= 2025-08-14T21:35:50.0911752Z DESIRED_CUDA= 2025-08-14T21:35:50.0911919Z GITHUB_RUN_ATTEMPT=1 2025-08-14T21:35:50.0912095Z VALGRIND=ON 2025-08-14T21:35:50.0912255Z ANACONDA_PYTHON_VERSION=3.9 2025-08-14T21:35:50.0912486Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-08-14T21:35:50.0912724Z TERM=vt100 2025-08-14T21:35:50.0912878Z INSTALLED_VISION=yes 2025-08-14T21:35:50.0913053Z BRANCH=main 2025-08-14T21:35:50.0913220Z SCCACHE_REGION=us-east-1 
2025-08-14T21:35:50.0913413Z OPENSSL_ROOT_DIR=/opt/openssl 2025-08-14T21:35:50.0913616Z CUDA_PATH=/usr/local/cuda 2025-08-14T21:35:50.0913957Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-08-14T21:35:50.0914328Z GITHUB_SERVER_URL=https://github.com 2025-08-14T21:35:50.0914532Z UCC_COMMIT= 2025-08-14T21:35:50.0914688Z REENABLED_ISSUES= 2025-08-14T21:35:50.0914853Z DOCS=yes 2025-08-14T21:35:50.0914997Z SHLVL=1 2025-08-14T21:35:50.0915252Z MAX_JOBS=30 2025-08-14T21:35:50.0915435Z GITHUB_ACTOR_ID=97764156 2025-08-14T21:35:50.0915745Z GITHUB_WORKFLOW_SHA=1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T21:35:50.0916008Z GITHUB_REF_NAME=main 2025-08-14T21:35:50.0916277Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-08-14T21:35:50.0916557Z GITHUB_JOB=test 2025-08-14T21:35:50.0916732Z NO_TEST_TIMEOUT=False 2025-08-14T21:35:50.0917004Z TD_DISTRIBUTED=False 2025-08-14T21:35:50.0917195Z GITHUB_REPOSITORY=pytorch/pytorch 2025-08-14T21:35:50.0917414Z GITHUB_RETENTION_DAYS=90 2025-08-14T21:35:50.0917610Z OPENSSL_DIR=/opt/openssl 2025-08-14T21:35:50.0917800Z GITHUB_ACTION_REPOSITORY= 2025-08-14T21:35:50.0918297Z PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:35:50.0918766Z GITHUB_BASE_REF= 2025-08-14T21:35:50.0918932Z INSTALLED_ACL= 2025-08-14T21:35:50.0919230Z ARTIFACTS_FILE_SUFFIX=test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T21:35:50.0919560Z CI=true 2025-08-14T21:35:50.0919729Z GITHUB_REPOSITORY_OWNER=pytorch 2025-08-14T21:35:50.0919989Z RUST_LOG=sccache::server=error 2025-08-14T21:35:50.0920183Z JOB_ID=48128261046 2025-08-14T21:35:50.0920345Z GITHUB_HEAD_REF= 2025-08-14T21:35:50.0920503Z GITHUB_ACTION_REF= 2025-08-14T21:35:50.0920716Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-08-14T21:35:50.0920949Z TEST_SHOWLOCALS=False 2025-08-14T21:35:50.0921131Z GITHUB_WORKFLOW=inductor-periodic 2025-08-14T21:35:50.0921341Z DEBIAN_FRONTEND=noninteractive 2025-08-14T21:35:50.0921724Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_5bcaec0a-cd0b-4277-ad1b-7179b99972eb 2025-08-14T21:35:50.0922104Z NO_TD=False 2025-08-14T21:35:50.0922270Z SKIP_SCCACHE_INITIALIZATION=1 2025-08-14T21:35:50.0922485Z NCCL_INCLUDE_DIR=/usr/local/cuda/include/ 2025-08-14T21:35:50.0922693Z _=/usr/bin/env 2025-08-14T21:35:50.0922853Z + echo 'Testing pytorch' 2025-08-14T21:35:50.0923033Z Testing pytorch 2025-08-14T21:35:50.0923230Z + export LANG=C.UTF-8 2025-08-14T21:35:50.0923398Z + LANG=C.UTF-8 2025-08-14T21:35:50.0923560Z + PR_NUMBER= 2025-08-14T21:35:50.0923768Z + [[ cpu_inductor_freezing_huggingface == \d\e\f\a\u\l\t ]] 2025-08-14T21:35:50.0924059Z + [[ cpu_inductor_freezing_huggingface == \d\i\s\t\r\i\b\u\t\e\d ]] 2025-08-14T21:35:50.0924345Z + [[ cpu_inductor_freezing_huggingface == \s\l\o\w ]] 2025-08-14T21:35:50.0924620Z + [[ linux-jammy-py3.9-gcc11-build == *slow-gradcheck* ]] 2025-08-14T21:35:50.0924882Z + [[ linux-jammy-py3.9-gcc11-build == *cuda* ]] 2025-08-14T21:35:50.0925130Z + [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-08-14T21:35:50.0925371Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-08-14T21:35:50.0925630Z + [[ cpu_inductor_freezing_huggingface == *crossref* ]] 2025-08-14T21:35:50.0925875Z + [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-08-14T21:35:50.0926117Z + [[ 
linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-08-14T21:35:50.0926366Z + [[ linux-jammy-py3.9-gcc11-build != *-bazel-* ]] 2025-08-14T21:35:50.0926595Z + pip_install ninja==1.10.2 2025-08-14T21:35:50.0926869Z + pip_install_pkg='python3 -m pip install --progress-bar off' 2025-08-14T21:35:50.0927173Z + python3 -m pip install --progress-bar off ninja==1.10.2 2025-08-14T21:35:50.4974620Z Collecting ninja==1.10.2 2025-08-14T21:35:50.5115540Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl.metadata (5.0 kB) 2025-08-14T21:35:50.5253044Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2025-08-14T21:35:51.2738767Z Installing collected packages: ninja 2025-08-14T21:35:51.2739857Z Attempting uninstall: ninja 2025-08-14T21:35:51.2747608Z Found existing installation: ninja 1.11.1.3 2025-08-14T21:35:51.2767737Z Uninstalling ninja-1.11.1.3: 2025-08-14T21:35:51.2817883Z Successfully uninstalled ninja-1.11.1.3 2025-08-14T21:35:51.3305930Z Successfully installed ninja-1.10.2 2025-08-14T21:35:51.4442730Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:35:51.4443994Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-08-14T21:35:51.4447418Z + [[ linux-jammy-py3.9-gcc11-build == *aarch64* ]] 2025-08-14T21:35:51.4447856Z + [[ linux-jammy-py3.9-gcc11-build == *asan* ]] 2025-08-14T21:35:51.4454175Z + [[ linux-jammy-py3.9-gcc11-build == *-debug* ]] 2025-08-14T21:35:51.4455628Z + [[ linux-jammy-py3.9-gcc11-build != *-bazel-* ]] 2025-08-14T21:35:51.4456005Z + echo 'We are not in debug mode: linux-jammy-py3.9-gcc11-build. Expect the assertion to pass' 2025-08-14T21:35:51.4456418Z We are not in debug mode: linux-jammy-py3.9-gcc11-build. 
Expect the assertion to pass 2025-08-14T21:35:51.4456710Z + cd test 2025-08-14T21:35:51.4456956Z + python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)' 2025-08-14T21:35:52.7611348Z + [[ cpu_inductor_freezing_huggingface == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2025-08-14T21:35:52.7611739Z + [[ cpu_inductor_freezing_huggingface == \n\o\g\p\u\_\A\V\X\5\1\2 ]] 2025-08-14T21:35:52.7612079Z + [[ cpu_inductor_freezing_huggingface == \l\e\g\a\c\y\_\n\v\i\d\i\a\_\d\r\i\v\e\r ]] 2025-08-14T21:35:52.7612391Z + DYNAMO_BENCHMARK_FLAGS=() 2025-08-14T21:35:52.7612639Z + [[ cpu_inductor_freezing_huggingface == *pr_time_benchmarks* ]] 2025-08-14T21:35:52.7612920Z + [[ cpu_inductor_freezing_huggingface == *dynamo_eager* ]] 2025-08-14T21:35:52.7613193Z + [[ cpu_inductor_freezing_huggingface == *aot_eager* ]] 2025-08-14T21:35:52.7613454Z + [[ cpu_inductor_freezing_huggingface == *aot_inductor* ]] 2025-08-14T21:35:52.7613738Z + [[ cpu_inductor_freezing_huggingface == *max_autotune_inductor* ]] 2025-08-14T21:35:52.7614025Z + [[ cpu_inductor_freezing_huggingface == *inductor* ]] 2025-08-14T21:35:52.7614278Z + [[ cpu_inductor_freezing_huggingface != *perf* ]] 2025-08-14T21:35:52.7614546Z + DYNAMO_BENCHMARK_FLAGS+=(--inductor) 2025-08-14T21:35:52.7614777Z + [[ cpu_inductor_freezing_huggingface == *dynamic* ]] 2025-08-14T21:35:52.7615028Z + [[ cpu_inductor_freezing_huggingface == *cpu* ]] 2025-08-14T21:35:52.7615265Z + DYNAMO_BENCHMARK_FLAGS+=(--device cpu) 2025-08-14T21:35:52.7878092Z + [[ linux-jammy-py3.9-gcc11-build == *libtorch* ]] 2025-08-14T21:35:52.7883426Z + [[ linux-jammy-py3.9-gcc11-build == *-bazel-* ]] 2025-08-14T21:35:52.7888044Z + cd test 2025-08-14T21:35:52.7892334Z + python -c 'import torch; print(torch.__config__.show())' 2025-08-14T21:35:53.7966094Z PyTorch built with: 2025-08-14T21:35:53.7970187Z - GCC 11.4 2025-08-14T21:35:53.7974854Z - C++ Version: 201703 2025-08-14T21:35:53.7979120Z - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications 2025-08-14T21:35:53.7984657Z - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-08-14T21:35:53.7989668Z - OpenMP 201511 (a.k.a. 
OpenMP 4.5) 2025-08-14T21:35:53.7993824Z - LAPACK is enabled (usually provided by MKL) 2025-08-14T21:35:53.7998056Z - NNPACK is enabled 2025-08-14T21:35:53.7998299Z - CPU capability usage: AVX512 2025-08-14T21:35:53.8001421Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=1fc683cf17c8c673044538d10266c00f92987be2, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Werror -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.9.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF, 2025-08-14T21:35:53.8004728Z 2025-08-14T21:35:54.0617778Z + cd test 2025-08-14T21:35:54.0618342Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2025-08-14T21:35:55.0424530Z ATen/Parallel: 2025-08-14T21:35:55.0427854Z at::get_num_threads() : 16 2025-08-14T21:35:55.0428158Z at::get_num_interop_threads() : 16 2025-08-14T21:35:55.0428401Z OpenMP 201511 (a.k.a. OpenMP 4.5) 2025-08-14T21:35:55.0428626Z omp_get_max_threads() : 16 2025-08-14T21:35:55.0429016Z Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications 2025-08-14T21:35:55.0429415Z mkl_get_max_threads() : 16 2025-08-14T21:35:55.0429735Z Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-08-14T21:35:55.0430059Z std::thread::hardware_concurrency() : 32 2025-08-14T21:35:55.0430284Z Environment variables: 2025-08-14T21:35:55.0430479Z OMP_NUM_THREADS : [not set] 2025-08-14T21:35:55.0430689Z MKL_NUM_THREADS : [not set] 2025-08-14T21:35:55.0430887Z ATen parallel backend: OpenMP 2025-08-14T21:35:55.0431025Z 2025-08-14T21:35:55.3285306Z + [[ cpu_inductor_freezing_huggingface == *numpy_2* ]] 2025-08-14T21:35:55.3285914Z + [[ linux-jammy-py3.9-gcc11-build == *aarch64* ]] 2025-08-14T21:35:55.3286307Z + [[ cpu_inductor_freezing_huggingface == *backward* ]] 2025-08-14T21:35:55.3286593Z + [[ cpu_inductor_freezing_huggingface == *xla* ]] 2025-08-14T21:35:55.3287011Z + [[ cpu_inductor_freezing_huggingface == *executorch* ]] 2025-08-14T21:35:55.3287472Z + [[ cpu_inductor_freezing_huggingface == \j\i\t\_\l\e\g\a\c\y ]] 2025-08-14T21:35:55.3287797Z + [[ linux-jammy-py3.9-gcc11-build == *libtorch* ]] 2025-08-14T21:35:55.3288132Z + [[ cpu_inductor_freezing_huggingface == distributed ]] 2025-08-14T21:35:55.3288443Z + [[ cpu_inductor_freezing_huggingface == *operator_benchmark* ]] 2025-08-14T21:35:55.3288778Z + [[ cpu_inductor_freezing_huggingface == *inductor_distributed* ]] 2025-08-14T21:35:55.3289108Z + [[ cpu_inductor_freezing_huggingface == *inductor-halide* ]] 2025-08-14T21:35:55.3289430Z + [[ cpu_inductor_freezing_huggingface == *inductor-triton-cpu* ]] 2025-08-14T21:35:55.3289794Z + [[ 
cpu_inductor_freezing_huggingface == *inductor-micro-benchmark* ]] 2025-08-14T21:35:55.3290118Z + [[ cpu_inductor_freezing_huggingface == *huggingface* ]] 2025-08-14T21:35:55.3290361Z + install_torchvision 2025-08-14T21:35:55.3290549Z + local orig_preload 2025-08-14T21:35:55.3290734Z + local commit 2025-08-14T21:35:55.3290912Z ++ get_pinned_commit vision 2025-08-14T21:35:55.3291128Z ++ cat .github/ci_commit_pins/vision.txt 2025-08-14T21:35:55.3705029Z + commit=966da7e46f65d6d49df3e31214470a4fe5cc8e66 2025-08-14T21:35:55.3705444Z + orig_preload= 2025-08-14T21:35:55.3705631Z + '[' -n '' ']' 2025-08-14T21:35:55.3705844Z + [[ linux-jammy-py3.9-gcc11-build == *cuda* ]] 2025-08-14T21:35:55.3706339Z + pip_build_and_install git+https://github.com/pytorch/vision.git@966da7e46f65d6d49df3e31214470a4fe5cc8e66 dist/vision 2025-08-14T21:35:55.3706924Z + local build_target=git+https://github.com/pytorch/vision.git@966da7e46f65d6d49df3e31214470a4fe5cc8e66 2025-08-14T21:35:55.3707311Z + local wheel_dir=dist/vision 2025-08-14T21:35:55.3707517Z + local found_whl=0 2025-08-14T21:35:55.3707711Z + for file in "${wheel_dir}"/*.whl 2025-08-14T21:35:55.3708039Z + [[ -f dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl ]] 2025-08-14T21:35:55.3708355Z + found_whl=1 2025-08-14T21:35:55.3708522Z + break 2025-08-14T21:35:55.3708671Z + '[' 1 == 0 ']' 2025-08-14T21:35:55.3708858Z + for file in "${wheel_dir}"/*.whl 2025-08-14T21:35:55.3709200Z + pip_install_whl dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:55.3710284Z + args=('dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl') 2025-08-14T21:35:55.3710620Z + local args 2025-08-14T21:35:55.3710910Z + [[ dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl == *\ * ]] 2025-08-14T21:35:55.3711375Z + for path in "${args[@]}" 2025-08-14T21:35:55.3711709Z + echo 'Installing dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl' 2025-08-14T21:35:55.3712159Z Installing dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:55.3712671Z + python3 -mpip install --no-index --no-deps dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:55.6745762Z Processing ./dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-08-14T21:35:55.6825412Z Installing collected packages: torchvision 2025-08-14T21:35:56.1766949Z Successfully installed torchvision-0.22.0a0+966da7e 2025-08-14T21:35:56.2208546Z + '[' -n '' ']' 2025-08-14T21:35:56.2213586Z + id=0 2025-08-14T21:35:56.2215733Z + test_dynamo_benchmark huggingface 0 2025-08-14T21:35:56.2216448Z ++ pwd 2025-08-14T21:35:56.2216784Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2025-08-14T21:35:56.2217078Z + local suite=huggingface 2025-08-14T21:35:56.2217348Z + shift 2025-08-14T21:35:56.2217520Z + local shard_id=0 2025-08-14T21:35:56.2217688Z + shift 2025-08-14T21:35:56.2217903Z + [[ cpu_inductor_freezing_huggingface == *perf_compare* ]] 2025-08-14T21:35:56.2218194Z + [[ cpu_inductor_freezing_huggingface == *perf* ]] 2025-08-14T21:35:56.2218457Z + [[ cpu_inductor_freezing_huggingface == *cpu* ]] 2025-08-14T21:35:56.2218694Z + local dt=float32 2025-08-14T21:35:56.2218901Z + [[ cpu_inductor_freezing_huggingface == *amp* ]] 2025-08-14T21:35:56.2219173Z + [[ cpu_inductor_freezing_huggingface == *freezing* ]] 2025-08-14T21:35:56.2219731Z + test_single_dynamo_benchmark inference huggingface 0 --inference --float32 --freezing 2025-08-14T21:35:56.2220083Z ++ 
pwd 2025-08-14T21:35:56.2220370Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2025-08-14T21:35:56.2220700Z + mkdir -p /var/lib/jenkins/workspace/test/test-reports 2025-08-14T21:35:56.2236632Z + local name=inference 2025-08-14T21:35:56.2237146Z + shift 2025-08-14T21:35:56.2237576Z + local suite=huggingface 2025-08-14T21:35:56.2237772Z + shift 2025-08-14T21:35:56.2237918Z + local shard_id=0 2025-08-14T21:35:56.2238076Z + shift 2025-08-14T21:35:56.2238222Z + partition_flags=() 2025-08-14T21:35:56.2238391Z + local partition_flags 2025-08-14T21:35:56.2238564Z + [[ -n 1 ]] 2025-08-14T21:35:56.2238720Z + [[ -n 0 ]] 2025-08-14T21:35:56.2238992Z + partition_flags=(--total-partitions "$NUM_TEST_SHARDS" --partition-id "$shard_id") 2025-08-14T21:35:56.2239357Z + [[ cpu_inductor_freezing_huggingface == *perf_compare* ]] 2025-08-14T21:35:56.2239635Z + [[ cpu_inductor_freezing_huggingface == *perf* ]] 2025-08-14T21:35:56.2239891Z + [[ cpu_inductor_freezing_huggingface == *_avx2* ]] 2025-08-14T21:35:56.2240141Z + [[ cpu_inductor_freezing_huggingface == *_avx512* ]] 2025-08-14T21:35:56.2240919Z + python benchmarks/dynamo/huggingface.py --ci --accuracy --timing --explain --print-compilation-time --inductor --device cpu --inference --float32 --freezing --total-partitions 1 --partition-id 0 --output /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv 2025-08-14T21:35:59.7097128Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:35:59.7098124Z from pkg_resources import resource_filename 2025-08-14T21:36:00.1903669Z 2025-08-14T21:36:00.1949292Z config.json: 0% 0.00/694 [00:00bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5317747Z 2025-08-14T21:38:25.5317880Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5318448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5319018Z layer_outputs = layer_module( 2025-08-14T21:38:25.5335973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5336378Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5336858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5337322Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5337778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5338270Z self_outputs = self.self( 2025-08-14T21:38:25.5338720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5339216Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5339965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5340581Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5341136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5341602Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5341749Z 2025-08-14T21:38:25.5342149Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5342740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5343296Z layer_outputs = layer_module( 2025-08-14T21:38:25.5343815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5344232Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5344689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5345148Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5345718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5346169Z self_outputs = self.self( 2025-08-14T21:38:25.5346627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5347186Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5347777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5348415Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5348692Z 2025-08-14T21:38:25.5348812Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5349401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5349933Z layer_outputs = layer_module( 2025-08-14T21:38:25.5350309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5350714Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5351178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5351650Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5352109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5352565Z self_outputs = self.self( 2025-08-14T21:38:25.5352990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5353465Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5354012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5354641Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5354905Z 2025-08-14T21:38:25.5355024Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5355564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5356101Z layer_outputs = layer_module( 2025-08-14T21:38:25.5356516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5356917Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5357368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5357835Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5358280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5359132Z self_outputs = self.self( 2025-08-14T21:38:25.5359598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5360078Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5360622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5361242Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5361508Z 2025-08-14T21:38:25.5361648Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5361885Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5362118Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5362331Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5362582Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5363167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5363698Z layer_outputs = layer_module( 2025-08-14T21:38:25.5364076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5364467Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5364919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5365368Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5365833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5366270Z self_outputs = self.self( 2025-08-14T21:38:25.5366697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 536, in forward 2025-08-14T21:38:25.5367169Z diagonal_mask = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5367707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 834, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5368286Z self._mask_invalid_locations(diagonal_attention_scores, window_overlap) 2025-08-14T21:38:25.5368845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 762, in _mask_invalid_locations 2025-08-14T21:38:25.5369411Z input_tensor[:, :affected_seq_len, :, : affected_seq_len + 1] = torch.full_like( 2025-08-14T21:38:25.5369643Z 2025-08-14T21:38:25.5369730Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5369988Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5370525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5371057Z layer_outputs = layer_module( 2025-08-14T21:38:25.5371408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5371776Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5372185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5372629Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5373058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5373465Z self_outputs = self.self( 2025-08-14T21:38:25.5373864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5374283Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5374410Z 2025-08-14T21:38:25.5374524Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5375036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5375530Z layer_outputs = layer_module( 2025-08-14T21:38:25.5375882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5376249Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5376724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5377157Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5377581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5378033Z self_outputs = self.self( 2025-08-14T21:38:25.5378430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:38:25.5378855Z attn_probs = nn.functional.softmax( 2025-08-14T21:38:25.5378992Z 2025-08-14T21:38:25.5379110Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5379727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5380255Z layer_outputs = layer_module( 2025-08-14T21:38:25.5380640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5381039Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5381468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5381892Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5382309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5382723Z self_outputs = self.self( 2025-08-14T21:38:25.5383112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5383575Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5384109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5384695Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:38:25.5385124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5385489Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5385645Z 2025-08-14T21:38:25.5385758Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5386257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5386733Z layer_outputs = layer_module( 2025-08-14T21:38:25.5387078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5387442Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5387848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5388260Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5388673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5389082Z self_outputs = self.self( 2025-08-14T21:38:25.5389470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5389924Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5390437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5390972Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:38:25.5391531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:38:25.5391987Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:38:25.5392362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5392701Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5392862Z 2025-08-14T21:38:25.5392967Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5393478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5393957Z layer_outputs = layer_module( 2025-08-14T21:38:25.5394297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5394662Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5395082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5395500Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5395909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5396315Z self_outputs = self.self( 2025-08-14T21:38:25.5396713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5397168Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5397680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5398236Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5398440Z 2025-08-14T21:38:25.5398552Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5399063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5399534Z layer_outputs = layer_module( 2025-08-14T21:38:25.5399881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5400239Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5400646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5401059Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5401472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5401885Z self_outputs = self.self( 2025-08-14T21:38:25.5402256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5402691Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5403197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5403741Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5403936Z 2025-08-14T21:38:25.5404035Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5404531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5404990Z layer_outputs = layer_module( 2025-08-14T21:38:25.5405361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5405702Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5406107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5406539Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5406930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5407328Z self_outputs = self.self( 2025-08-14T21:38:25.5407707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:38:25.5408212Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:38:25.5408439Z 2025-08-14T21:38:25.5408517Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5408727Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5408956Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5409437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5409906Z layer_outputs = layer_module( 2025-08-14T21:38:25.5410239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5410587Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5410983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:38:25.5411390Z layer_output = apply_chunking_to_forward( 2025-08-14T21:38:25.5411776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:38:25.5412157Z return forward_fn(*input_tensors) 2025-08-14T21:38:25.5412549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:38:25.5412991Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:38:25.5413415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:38:25.5413849Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:38:25.5414206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:25.5414533Z return self.act(input) 2025-08-14T21:38:25.5414640Z 2025-08-14T21:38:25.5414725Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5414921Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5415144Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5415640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5416100Z layer_outputs = layer_module( 2025-08-14T21:38:25.5416428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5416774Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5417175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5417567Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5417977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5418380Z self_outputs = self.self( 2025-08-14T21:38:25.5418805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5419237Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5419836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5420527Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5420794Z 2025-08-14T21:38:25.5420889Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5421140Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5421673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5422151Z layer_outputs = layer_module( 2025-08-14T21:38:25.5422492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5422848Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5423251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5423667Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5424068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5424473Z self_outputs = self.self( 2025-08-14T21:38:25.5424864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5425303Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5425777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5426320Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5426808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5427224Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5427350Z 2025-08-14T21:38:25.5427452Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5427952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5428428Z layer_outputs = layer_module( 2025-08-14T21:38:25.5428762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5429113Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5429525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5429939Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5430337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5430750Z self_outputs = self.self( 2025-08-14T21:38:25.5431166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5431614Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5432094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5432670Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5432913Z 2025-08-14T21:38:25.5433074Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5433599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5434097Z layer_outputs = layer_module( 2025-08-14T21:38:25.5434496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5434883Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5435294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5435726Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5436156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5436582Z self_outputs = self.self( 2025-08-14T21:38:25.5436987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5437442Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5437959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5438561Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5438808Z 2025-08-14T21:38:25.5438918Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5439458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5440001Z layer_outputs = layer_module( 2025-08-14T21:38:25.5440363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5440732Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5441163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5441591Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5442211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5442640Z self_outputs = self.self( 2025-08-14T21:38:25.5443066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5443534Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5444067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5444703Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5444962Z 2025-08-14T21:38:25.5445043Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5445263Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5445505Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5446032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5446526Z layer_outputs = layer_module( 2025-08-14T21:38:25.5446899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5447286Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5447707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5448239Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5448656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5449078Z self_outputs = self.self( 2025-08-14T21:38:25.5449537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5449951Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5450075Z 2025-08-14T21:38:25.5450182Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5450696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5451182Z layer_outputs = layer_module( 2025-08-14T21:38:25.5451523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5451888Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5452292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5452706Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5453101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5453501Z self_outputs = self.self( 2025-08-14T21:38:25.5453887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:38:25.5454296Z attn_probs = nn.functional.softmax( 2025-08-14T21:38:25.5454426Z 2025-08-14T21:38:25.5454505Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5454737Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5455244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5455712Z layer_outputs = layer_module( 2025-08-14T21:38:25.5456055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5456412Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5456820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5457218Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5457620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5458021Z self_outputs = self.self( 2025-08-14T21:38:25.5458408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5458853Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5459367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5460046Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:38:25.5460505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5460877Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5461054Z 2025-08-14T21:38:25.5461156Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5461675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5462155Z layer_outputs = layer_module( 2025-08-14T21:38:25.5463427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5463810Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5464231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5464737Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5465144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5465547Z self_outputs = self.self( 2025-08-14T21:38:25.5465933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5466369Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5466884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5467416Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:38:25.5467906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:38:25.5468366Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:38:25.5468700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5469043Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5469192Z 2025-08-14T21:38:25.5469293Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5469790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5470263Z layer_outputs = layer_module( 2025-08-14T21:38:25.5470597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5470937Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5471331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5471730Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5472125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5472508Z self_outputs = self.self( 2025-08-14T21:38:25.5472884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5473314Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5473808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5474341Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5474541Z 2025-08-14T21:38:25.5474638Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5475123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5475571Z layer_outputs = layer_module( 2025-08-14T21:38:25.5475910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5476254Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5476652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5477042Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5477485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5477878Z self_outputs = self.self( 2025-08-14T21:38:25.5478260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5478727Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5479226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5479757Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5479952Z 2025-08-14T21:38:25.5480060Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5480544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5481007Z layer_outputs = layer_module( 2025-08-14T21:38:25.5481336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5481679Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5482079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5482465Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5482848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5483222Z self_outputs = self.self( 2025-08-14T21:38:25.5483594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:38:25.5484090Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:38:25.5484313Z 2025-08-14T21:38:25.5484397Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5484591Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5484822Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5485311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5485756Z layer_outputs = layer_module( 2025-08-14T21:38:25.5486083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5486425Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5486817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:38:25.5487217Z layer_output = apply_chunking_to_forward( 2025-08-14T21:38:25.5487598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:38:25.5487989Z return forward_fn(*input_tensors) 2025-08-14T21:38:25.5488393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:38:25.5488829Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:38:25.5489251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:38:25.5489700Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:38:25.5490061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:25.5490385Z return self.act(input) 2025-08-14T21:38:25.5490490Z 2025-08-14T21:38:25.5490609Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5490810Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5491030Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5491503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5492012Z layer_outputs = layer_module( 2025-08-14T21:38:25.5492344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5492687Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5493076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5493481Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5493889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5494290Z self_outputs = self.self( 2025-08-14T21:38:25.5494670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5495105Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5495586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5496161Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5496390Z 2025-08-14T21:38:25.5496466Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5496694Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5497202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5497661Z layer_outputs = layer_module( 2025-08-14T21:38:25.5498004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5498363Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5498784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5499211Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5499721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5500151Z self_outputs = self.self( 2025-08-14T21:38:25.5500553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5500990Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5501507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5502048Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5502533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5502941Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5503082Z 2025-08-14T21:38:25.5503186Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5503690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5504156Z layer_outputs = layer_module( 2025-08-14T21:38:25.5504502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5504905Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5505326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5505754Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5506178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5506581Z self_outputs = self.self( 2025-08-14T21:38:25.5506970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5507405Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5507880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5508450Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5508685Z 2025-08-14T21:38:25.5508792Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5509386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5509868Z layer_outputs = layer_module( 2025-08-14T21:38:25.5510209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5510564Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5511002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5511425Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5511850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5512245Z self_outputs = self.self( 2025-08-14T21:38:25.5512638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5513075Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5513565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5514127Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5514369Z 2025-08-14T21:38:25.5514471Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5514976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5515450Z layer_outputs = layer_module( 2025-08-14T21:38:25.5515787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5516144Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5516553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5516953Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5517357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5517762Z self_outputs = self.self( 2025-08-14T21:38:25.5518155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5518582Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5519119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5519690Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5519966Z 2025-08-14T21:38:25.5520052Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5520257Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5520495Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5520996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5521463Z layer_outputs = layer_module( 2025-08-14T21:38:25.5521806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5522160Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5522573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5522975Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5523378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5523782Z self_outputs = self.self( 2025-08-14T21:38:25.5524170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5524565Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5524692Z 2025-08-14T21:38:25.5524794Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5525293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5525757Z layer_outputs = layer_module( 2025-08-14T21:38:25.5526103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5526460Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5526868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5527267Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5527669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5528069Z self_outputs = self.self( 2025-08-14T21:38:25.5528455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:38:25.5528859Z attn_probs = nn.functional.softmax( 2025-08-14T21:38:25.5528996Z 2025-08-14T21:38:25.5529073Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5529315Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5529794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5530256Z layer_outputs = layer_module( 2025-08-14T21:38:25.5530586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5530933Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5531325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5531719Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5532110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5532500Z self_outputs = self.self( 2025-08-14T21:38:25.5532920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5533360Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5533892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5534444Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:38:25.5534852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5535188Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5535333Z 2025-08-14T21:38:25.5535438Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5535936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5536407Z layer_outputs = layer_module( 2025-08-14T21:38:25.5536747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5537103Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5537500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5537910Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5538316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5538708Z self_outputs = self.self( 2025-08-14T21:38:25.5539096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5539614Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5540140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5540668Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:38:25.5541205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:38:25.5541734Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:38:25.5542266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5542614Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5542781Z 2025-08-14T21:38:25.5542887Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5543412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5543904Z layer_outputs = layer_module( 2025-08-14T21:38:25.5544244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5544608Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5545025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5545434Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5545842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5546251Z self_outputs = self.self( 2025-08-14T21:38:25.5546630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5547147Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5547663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5548260Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5548459Z 2025-08-14T21:38:25.5548571Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5549071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5549546Z layer_outputs = layer_module( 2025-08-14T21:38:25.5549891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5550253Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5550662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5551073Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5551476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5551870Z self_outputs = self.self( 2025-08-14T21:38:25.5552258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5552700Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5553210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5553747Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5553952Z 2025-08-14T21:38:25.5554055Z cudagraph partition due to non gpu ops. 
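For reference, the two einsum contractions that keep appearing in the frames above are Longformer's chunked attention products: "bcxd,bcyd->bcxy" (modeling_longformer.py line 796) forms the per-chunk query/key scores, and "bcwd,bcdh->bcwh" (line 878) applies the attention probabilities to the chunked values. The following standalone sketch reproduces just those two contractions with small, made-up shapes; the names and sizes are illustrative and not taken from this job:

import torch

# Illustrative chunk layout: b = batch*heads, c = chunks, x/y/w = positions in a
# chunk, d = contraction axis, h = value head dim. Sizes here are arbitrary.
b, c, x, d, h = 2, 3, 4, 8, 8
query = torch.randn(b, c, x, d)
key = torch.randn(b, c, x, d)

# Per-chunk query/key scores, as in _sliding_chunks_query_key_matmul (line 796 above).
scores = torch.einsum("bcxd,bcyd->bcxy", query, key)      # -> (b, c, x, x)
probs = scores.softmax(dim=-1)

# Per-chunk weighted sum of values, as in _sliding_chunks_matmul_attn_probs_value (line 878 above).
value = torch.randn(b, c, x, h)
context = torch.einsum("bcwd,bcdh->bcwh", probs, value)   # -> (b, c, x, h)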
Found from : 2025-08-14T21:38:25.5554552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5555022Z layer_outputs = layer_module( 2025-08-14T21:38:25.5555353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5555705Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5556111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5556497Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5556872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5557248Z self_outputs = self.self( 2025-08-14T21:38:25.5557618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:38:25.5558092Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:38:25.5558322Z 2025-08-14T21:38:25.5558399Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5558598Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5558816Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5559286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5559734Z layer_outputs = layer_module( 2025-08-14T21:38:25.5560060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5560394Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5560813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:38:25.5561218Z layer_output = apply_chunking_to_forward( 2025-08-14T21:38:25.5561597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:38:25.5561992Z return forward_fn(*input_tensors) 2025-08-14T21:38:25.5562392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:38:25.5562825Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:38:25.5563249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:38:25.5563671Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:38:25.5564037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:25.5564384Z return self.act(input) 2025-08-14T21:38:25.5564489Z 2025-08-14T21:38:25.5564570Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5564760Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5564982Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5565458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5565896Z layer_outputs = layer_module( 2025-08-14T21:38:25.5566218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5566557Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5566950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5567345Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5567740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5568131Z self_outputs = self.self( 2025-08-14T21:38:25.5568508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5568931Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5569415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5569954Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5570176Z 2025-08-14T21:38:25.5570251Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5570475Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5570948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5571392Z layer_outputs = layer_module( 2025-08-14T21:38:25.5571708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5572069Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5572468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5572868Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5573251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5573641Z self_outputs = self.self( 2025-08-14T21:38:25.5574079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5574496Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5574974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5575526Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5576002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5576391Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5576528Z 2025-08-14T21:38:25.5576630Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5577120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5577585Z layer_outputs = layer_module( 2025-08-14T21:38:25.5577911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5578260Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5578663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5579069Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5579471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5579989Z self_outputs = self.self( 2025-08-14T21:38:25.5580420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5580887Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5581399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5581957Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5582192Z 2025-08-14T21:38:25.5582303Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5582792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5583257Z layer_outputs = layer_module( 2025-08-14T21:38:25.5583597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5583952Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5584344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5584746Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5585141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5585538Z self_outputs = self.self( 2025-08-14T21:38:25.5585908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5586325Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5586799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5587341Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5587576Z 2025-08-14T21:38:25.5587675Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5588222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5588688Z layer_outputs = layer_module( 2025-08-14T21:38:25.5589024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5589402Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5589797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5590194Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5590581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5590969Z self_outputs = self.self( 2025-08-14T21:38:25.5591347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5591765Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5592225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5592770Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5592993Z 2025-08-14T21:38:25.5593077Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5593276Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5593502Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5593987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5594448Z layer_outputs = layer_module( 2025-08-14T21:38:25.5594775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5595126Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5595520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5595922Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5596316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5596712Z self_outputs = self.self( 2025-08-14T21:38:25.5597091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5597476Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5597600Z 2025-08-14T21:38:25.5597698Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5598182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5598643Z layer_outputs = layer_module( 2025-08-14T21:38:25.5598971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5599319Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5599716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5600116Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5600503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5600895Z self_outputs = self.self( 2025-08-14T21:38:25.5601276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:38:25.5601707Z attn_probs = nn.functional.softmax( 2025-08-14T21:38:25.5601845Z 2025-08-14T21:38:25.5601922Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5602147Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5602664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5603116Z layer_outputs = layer_module( 2025-08-14T21:38:25.5603447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5603797Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5604195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5604663Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5605058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5605453Z self_outputs = self.self( 2025-08-14T21:38:25.5605821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5606258Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5606795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5607379Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:38:25.5607784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5608129Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5608278Z 2025-08-14T21:38:25.5608389Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5608914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5609399Z layer_outputs = layer_module( 2025-08-14T21:38:25.5609756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5610127Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5610542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5610951Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5611349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5611740Z self_outputs = self.self( 2025-08-14T21:38:25.5612113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5612545Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5613042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5613558Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:38:25.5614042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:38:25.5614502Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:38:25.5614833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5615175Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5615325Z 2025-08-14T21:38:25.5615535Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5616033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5616560Z layer_outputs = layer_module( 2025-08-14T21:38:25.5616892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5617248Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5617662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5618082Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5618495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5618914Z self_outputs = self.self( 2025-08-14T21:38:25.5619323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5619910Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5620468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5621064Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5621292Z 2025-08-14T21:38:25.5621403Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5621891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5622341Z layer_outputs = layer_module( 2025-08-14T21:38:25.5622680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5623029Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5623420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5623823Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5624220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5624612Z self_outputs = self.self( 2025-08-14T21:38:25.5624979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5625413Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5625912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5626444Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5626636Z 2025-08-14T21:38:25.5626737Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5627222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5627681Z layer_outputs = layer_module( 2025-08-14T21:38:25.5628009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5628351Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5628748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5629145Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5629568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5629961Z self_outputs = self.self( 2025-08-14T21:38:25.5630340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:38:25.5630885Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:38:25.5631113Z 2025-08-14T21:38:25.5631190Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5631395Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5631622Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5632106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5632559Z layer_outputs = layer_module( 2025-08-14T21:38:25.5632895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5633244Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5633632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:38:25.5634046Z layer_output = apply_chunking_to_forward( 2025-08-14T21:38:25.5634434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:38:25.5634819Z return forward_fn(*input_tensors) 2025-08-14T21:38:25.5635211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:38:25.5635653Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:38:25.5636084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:38:25.5636521Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:38:25.5636883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:25.5637215Z return self.act(input) 2025-08-14T21:38:25.5637326Z 2025-08-14T21:38:25.5637415Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5637611Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5637843Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5638330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5638800Z layer_outputs = layer_module( 2025-08-14T21:38:25.5639127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5639475Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5639881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5640283Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5640699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5641103Z self_outputs = self.self( 2025-08-14T21:38:25.5641508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5642112Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5642610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5643195Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5643521Z 2025-08-14T21:38:25.5643613Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5643860Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5644407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5644983Z layer_outputs = layer_module( 2025-08-14T21:38:25.5645362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5645730Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5646150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5646569Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5646981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5647396Z self_outputs = self.self( 2025-08-14T21:38:25.5647789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5648246Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5648736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5649282Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5649772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5650184Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5650317Z 2025-08-14T21:38:25.5650419Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5650935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5651403Z layer_outputs = layer_module( 2025-08-14T21:38:25.5651740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5652090Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5652498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5652903Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5653294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5653696Z self_outputs = self.self( 2025-08-14T21:38:25.5654083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5654509Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5654979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5655548Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5655788Z 2025-08-14T21:38:25.5655891Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5656386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5656844Z layer_outputs = layer_module( 2025-08-14T21:38:25.5657181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5657540Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5657974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5658374Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5658812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5659218Z self_outputs = self.self( 2025-08-14T21:38:25.5659702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5660183Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5660714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5661339Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5661600Z 2025-08-14T21:38:25.5661710Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5662258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5662777Z layer_outputs = layer_module( 2025-08-14T21:38:25.5663149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5663537Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5663989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5664445Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5664889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5665339Z self_outputs = self.self( 2025-08-14T21:38:25.5665777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5666255Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5666793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5667388Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5667626Z 2025-08-14T21:38:25.5667704Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5667910Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5668133Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5668624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5669084Z layer_outputs = layer_module( 2025-08-14T21:38:25.5669417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5669760Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5670156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5670558Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5670949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5671339Z self_outputs = self.self( 2025-08-14T21:38:25.5671716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5672115Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5672232Z 2025-08-14T21:38:25.5672365Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5672853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5673348Z layer_outputs = layer_module( 2025-08-14T21:38:25.5673682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5674023Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5674423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5674827Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5675216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5675609Z self_outputs = self.self( 2025-08-14T21:38:25.5675993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:38:25.5676395Z attn_probs = nn.functional.softmax( 2025-08-14T21:38:25.5676525Z 2025-08-14T21:38:25.5676601Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5676830Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5677318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5677784Z layer_outputs = layer_module( 2025-08-14T21:38:25.5678109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5678453Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5678851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5679240Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5679638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5680031Z self_outputs = self.self( 2025-08-14T21:38:25.5680410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5680837Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5681336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5681891Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:38:25.5682297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5682622Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5682776Z 2025-08-14T21:38:25.5682874Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5683365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5683837Z layer_outputs = layer_module( 2025-08-14T21:38:25.5684160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5684508Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5684904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5685293Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5685724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5686114Z self_outputs = self.self( 2025-08-14T21:38:25.5686489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5686956Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5687454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5687967Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:38:25.5688450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:38:25.5688887Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:38:25.5689215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5689547Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5689690Z 2025-08-14T21:38:25.5689790Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5690277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5690734Z layer_outputs = layer_module( 2025-08-14T21:38:25.5691067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5691402Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5691799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5692196Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5692592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5692977Z self_outputs = self.self( 2025-08-14T21:38:25.5693355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5693793Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5694283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5694819Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5695019Z 2025-08-14T21:38:25.5695120Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5695610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5696062Z layer_outputs = layer_module( 2025-08-14T21:38:25.5696394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5696744Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5697139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5697530Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5697926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5698326Z self_outputs = self.self( 2025-08-14T21:38:25.5698711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5699146Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5699787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5700344Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5700577Z 2025-08-14T21:38:25.5700689Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5701181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5701652Z layer_outputs = layer_module( 2025-08-14T21:38:25.5701993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5702346Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5702746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5703146Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5703539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5704012Z self_outputs = self.self( 2025-08-14T21:38:25.5704399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:38:25.5704910Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:38:25.5705173Z 2025-08-14T21:38:25.5705263Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5705462Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5705691Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5706189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5706643Z layer_outputs = layer_module( 2025-08-14T21:38:25.5706981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5707337Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5707733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:38:25.5708137Z layer_output = apply_chunking_to_forward( 2025-08-14T21:38:25.5708526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:38:25.5708905Z return forward_fn(*input_tensors) 2025-08-14T21:38:25.5709305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:38:25.5709736Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:38:25.5710169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:38:25.5710603Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:38:25.5710964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:25.5711299Z return self.act(input) 2025-08-14T21:38:25.5711413Z 2025-08-14T21:38:25.5711491Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5711698Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5711918Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:38:25.5712408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5712870Z layer_outputs = layer_module(
2025-08-14T21:38:25.5713233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5713585Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5713989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5714429Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5714821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5715234Z self_outputs = self.self(
2025-08-14T21:38:25.5715604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
2025-08-14T21:38:25.5716010Z attn_scores = self._sliding_chunks_query_key_matmul(
2025-08-14T21:38:25.5716468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul
2025-08-14T21:38:25.5717012Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply
2025-08-14T21:38:25.5717236Z
2025-08-14T21:38:25.5717320Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5717548Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5718018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5718465Z layer_outputs = layer_module(
2025-08-14T21:38:25.5718790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5719118Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5719511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5719900Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5720293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5720679Z self_outputs = self.self(
2025-08-14T21:38:25.5721062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
2025-08-14T21:38:25.5721485Z attn_scores = self._sliding_chunks_query_key_matmul(
2025-08-14T21:38:25.5721960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul
2025-08-14T21:38:25.5722479Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False))
2025-08-14T21:38:25.5722947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk
2025-08-14T21:38:25.5723344Z hidden_states = hidden_states.view(
2025-08-14T21:38:25.5723470Z
2025-08-14T21:38:25.5723581Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:38:25.5724063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5724531Z layer_outputs = layer_module(
2025-08-14T21:38:25.5724866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5725210Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5725613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5726016Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5726417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5726843Z self_outputs = self.self(
2025-08-14T21:38:25.5727225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
2025-08-14T21:38:25.5727647Z attn_scores = self._sliding_chunks_query_key_matmul(
2025-08-14T21:38:25.5728150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul
2025-08-14T21:38:25.5728698Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply
2025-08-14T21:38:25.5728938Z
2025-08-14T21:38:25.5729042Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5729538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5730006Z layer_outputs = layer_module(
2025-08-14T21:38:25.5730341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5730695Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5731100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5731505Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5731911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5732309Z self_outputs = self.self(
2025-08-14T21:38:25.5732697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
2025-08-14T21:38:25.5733123Z attn_scores = self._sliding_chunks_query_key_matmul(
2025-08-14T21:38:25.5733607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul
2025-08-14T21:38:25.5734170Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply
2025-08-14T21:38:25.5734424Z
2025-08-14T21:38:25.5734536Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:38:25.5735047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5735524Z layer_outputs = layer_module(
2025-08-14T21:38:25.5735871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5736239Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5736649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5737064Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5737480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5737878Z self_outputs = self.self(
2025-08-14T21:38:25.5738277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
2025-08-14T21:38:25.5738717Z attn_scores = self._sliding_chunks_query_key_matmul(
2025-08-14T21:38:25.5739205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul
2025-08-14T21:38:25.5739879Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply
2025-08-14T21:38:25.5740147Z
2025-08-14T21:38:25.5740234Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5740468Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5740722Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5741169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5741246Z layer_outputs = layer_module(
2025-08-14T21:38:25.5741525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5741607Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5742061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5742152Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5742436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5742520Z self_outputs = self.self(
2025-08-14T21:38:25.5742822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward
2025-08-14T21:38:25.5742900Z attn_scores += diagonal_mask
2025-08-14T21:38:25.5742904Z
2025-08-14T21:38:25.5743027Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:38:25.5743404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5743493Z layer_outputs = layer_module(
2025-08-14T21:38:25.5743736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5743813Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5744096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5744171Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5744450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5744525Z self_outputs = self.self(
2025-08-14T21:38:25.5744787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward
2025-08-14T21:38:25.5744874Z attn_probs = nn.functional.softmax(
2025-08-14T21:38:25.5744877Z
2025-08-14T21:38:25.5744954Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5745052Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5745399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5745467Z layer_outputs = layer_module(
2025-08-14T21:38:25.5745686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5745764Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5746031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5746110Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5746379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5746445Z self_outputs = self.self(
2025-08-14T21:38:25.5746716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
2025-08-14T21:38:25.5746829Z attn_output = self._sliding_chunks_matmul_attn_probs_value(
2025-08-14T21:38:25.5747174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value
2025-08-14T21:38:25.5747415Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1)
2025-08-14T21:38:25.5747606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad
2025-08-14T21:38:25.5747712Z return torch._C._nn.pad(input, pad, mode, value)
2025-08-14T21:38:25.5747760Z
2025-08-14T21:38:25.5747859Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:38:25.5748200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5748270Z layer_outputs = layer_module(
2025-08-14T21:38:25.5748480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5748566Z return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5748832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5748915Z self_attn_outputs = self.attention(
2025-08-14T21:38:25.5749179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5749249Z self_outputs = self.self(
2025-08-14T21:38:25.5749523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
2025-08-14T21:38:25.5749633Z attn_output = self._sliding_chunks_matmul_attn_probs_value(
2025-08-14T21:38:25.5749967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value
2025-08-14T21:38:25.5750102Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs)
2025-08-14T21:38:25.5750401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize
2025-08-14T21:38:25.5750500Z chunked_hidden_states = nn.functional.pad(
2025-08-14T21:38:25.5750682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad
2025-08-14T21:38:25.5750775Z return torch._C._nn.pad(input, pad, mode, value)
2025-08-14T21:38:25.5750781Z
2025-08-14T21:38:25.5750892Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:38:25.5826552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5826632Z layer_outputs = layer_module( 2025-08-14T21:38:25.5826841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5826918Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5827192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5827266Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5827580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5827648Z self_outputs = self.self( 2025-08-14T21:38:25.5827917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5828059Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5828387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5828574Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5828577Z 2025-08-14T21:38:25.5828654Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5828755Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5829107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5829177Z layer_outputs = layer_module( 2025-08-14T21:38:25.5829398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5829478Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5829752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5829834Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5830108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5830175Z self_outputs = self.self( 2025-08-14T21:38:25.5830465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5830564Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5830911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5831062Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5831335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5831418Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5831421Z 2025-08-14T21:38:25.5831524Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5831879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5831949Z layer_outputs = layer_module( 2025-08-14T21:38:25.5832170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5832256Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5832531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5832616Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5832889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5832956Z self_outputs = self.self( 2025-08-14T21:38:25.5833237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5833335Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5833669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5833896Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5833900Z 2025-08-14T21:38:25.5834001Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5834379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5834449Z layer_outputs = layer_module( 2025-08-14T21:38:25.5834660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5834748Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5835020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5835101Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5835375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5835444Z self_outputs = self.self( 2025-08-14T21:38:25.5835720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5835820Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5836154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5836328Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5836332Z 2025-08-14T21:38:25.5836431Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5836773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5836844Z layer_outputs = layer_module( 2025-08-14T21:38:25.5837062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5837139Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5837411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5837490Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5837758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5837825Z self_outputs = self.self( 2025-08-14T21:38:25.5838100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5838195Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5838532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5838710Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5838716Z 2025-08-14T21:38:25.5838793Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5838879Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5838979Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5839327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5839396Z layer_outputs = layer_module( 2025-08-14T21:38:25.5839606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5839691Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5839994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5840069Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5840356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5840463Z self_outputs = self.self( 2025-08-14T21:38:25.5840746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5840818Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5840821Z 2025-08-14T21:38:25.5840923Z cudagraph partition due to non gpu ops. 
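The stack reports above all bottom out in the same handful of Longformer operations: torch.einsum over the sliding chunks (modeling_longformer.py lines 796 and 878), nn.functional.pad with an explicit fill value (lines 863 and 699), view/transpose/reshape reshuffles (lines 719 and 618), the in-place attn_scores += diagonal_mask (line 541), and the softmax/activation calls (lines 579 and 1161). The sketch below is a minimal, self-contained stand-in for that op mix with toy shapes chosen only for illustration; it is not the Hugging Face implementation, just a way to make the traced call sites concrete.

import torch
import torch.nn.functional as F


class SlidingChunksToy(torch.nn.Module):
    # Toy stand-in for the sliding-chunks ops named in the partition reports above.
    def forward(self, query, key, diagonal_mask):
        # einsum over chunked query/key, as at modeling_longformer.py:796
        scores = torch.einsum("bcxd,bcyd->bcxy", query, key)
        # pad with an explicit fill value, as at modeling_longformer.py:863 and :699
        scores = F.pad(scores, (0, 0, 0, 1), value=-1)
        # in-place mask add, as at modeling_longformer.py:541
        scores += diagonal_mask
        # softmax followed by a reshape, as at modeling_longformer.py:579 and :618
        probs = F.softmax(scores, dim=-1)
        return probs.reshape(probs.size(0), -1).contiguous()


q = torch.randn(2, 4, 6, 16)
k = torch.randn(2, 4, 6, 16)
mask = torch.zeros(2, 4, 7, 6)
compiled = torch.compile(SlidingChunksToy())
print(compiled(q, k, mask).shape)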
Found from : 2025-08-14T21:38:25.5841271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5841342Z layer_outputs = layer_module( 2025-08-14T21:38:25.5841562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5841637Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5842075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5842166Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5842445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5842517Z self_outputs = self.self( 2025-08-14T21:38:25.5842802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:38:25.5842884Z attn_probs = nn.functional.softmax( 2025-08-14T21:38:25.5842888Z 2025-08-14T21:38:25.5842976Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5843089Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5843437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5843520Z layer_outputs = layer_module( 2025-08-14T21:38:25.5843737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5843824Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5844105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5844178Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5844456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5844526Z self_outputs = self.self( 2025-08-14T21:38:25.5844806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5844961Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5845311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5845490Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:38:25.5845682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5845781Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5845785Z 2025-08-14T21:38:25.5845895Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5846330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5846414Z layer_outputs = layer_module( 2025-08-14T21:38:25.5846631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5846759Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5847047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5847122Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5847411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5847483Z self_outputs = self.self( 2025-08-14T21:38:25.5847764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5847890Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5848238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5848385Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:38:25.5848704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:38:25.5848797Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:38:25.5848996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5849094Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5849097Z 2025-08-14T21:38:25.5849200Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5849561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5849635Z layer_outputs = layer_module( 2025-08-14T21:38:25.5849863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5849944Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5850223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5850308Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5850589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5850667Z self_outputs = self.self( 2025-08-14T21:38:25.5850945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5851059Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5851413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5851567Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5851570Z 2025-08-14T21:38:25.5851683Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5852034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5852117Z layer_outputs = layer_module( 2025-08-14T21:38:25.5852343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5852421Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5852733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5852817Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5853098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5853208Z self_outputs = self.self( 2025-08-14T21:38:25.5853485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5853597Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5853952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5854099Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5854103Z 2025-08-14T21:38:25.5854218Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5854576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5854651Z layer_outputs = layer_module( 2025-08-14T21:38:25.5854877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5854957Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5855244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5855318Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5855595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5855673Z self_outputs = self.self( 2025-08-14T21:38:25.5855955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:38:25.5856141Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:38:25.5856163Z 2025-08-14T21:38:25.5856242Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5856319Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5856429Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5856776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5856844Z layer_outputs = layer_module( 2025-08-14T21:38:25.5857066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5857142Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5857425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:38:25.5857508Z layer_output = apply_chunking_to_forward( 2025-08-14T21:38:25.5857763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:38:25.5857849Z return forward_fn(*input_tensors) 2025-08-14T21:38:25.5858126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:38:25.5858234Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:38:25.5858524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:38:25.5858633Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:38:25.5858905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:25.5858977Z return self.act(input) 2025-08-14T21:38:25.5858980Z 2025-08-14T21:38:25.5859061Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5859145Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5859327Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5859737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5859818Z layer_outputs = layer_module( 2025-08-14T21:38:25.5860052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5860143Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5860439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5860524Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5860826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5860902Z self_outputs = self.self( 2025-08-14T21:38:25.5861216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5861318Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5861656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5861857Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5861861Z 2025-08-14T21:38:25.5861937Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5862048Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5862388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5862457Z layer_outputs = layer_module( 2025-08-14T21:38:25.5862681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5862758Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5863026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5863108Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5863380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5863456Z self_outputs = self.self( 2025-08-14T21:38:25.5863725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5863823Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5864161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5864315Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5864589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5864663Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5864666Z 2025-08-14T21:38:25.5864769Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5865115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5865220Z layer_outputs = layer_module( 2025-08-14T21:38:25.5865445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5865522Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5865826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5865910Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5866182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5866250Z self_outputs = self.self( 2025-08-14T21:38:25.5866528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5866625Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5866965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5867141Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5867147Z 2025-08-14T21:38:25.5867249Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5867594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5867665Z layer_outputs = layer_module( 2025-08-14T21:38:25.5867886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5867962Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5868236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5868317Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5868587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5868666Z self_outputs = self.self( 2025-08-14T21:38:25.5868939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5869035Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5869369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5869544Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5869549Z 2025-08-14T21:38:25.5869656Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5869998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5870068Z layer_outputs = layer_module( 2025-08-14T21:38:25.5870290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5870367Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5870639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5870719Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5870990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5871067Z self_outputs = self.self( 2025-08-14T21:38:25.5871367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5871464Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5871799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5872005Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5872008Z 2025-08-14T21:38:25.5872095Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5872172Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5872273Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5872626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5872694Z layer_outputs = layer_module( 2025-08-14T21:38:25.5872912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5872988Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5873254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5873336Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5873603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5873669Z self_outputs = self.self( 2025-08-14T21:38:25.5873940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5874010Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5874012Z 2025-08-14T21:38:25.5874118Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5874451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5874521Z layer_outputs = layer_module( 2025-08-14T21:38:25.5874739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5874814Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5875085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5875155Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5875419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5875494Z self_outputs = self.self( 2025-08-14T21:38:25.5875760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-08-14T21:38:25.5875834Z attn_probs = nn.functional.softmax( 2025-08-14T21:38:25.5875845Z 2025-08-14T21:38:25.5875918Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5876014Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5876366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5876433Z layer_outputs = layer_module( 2025-08-14T21:38:25.5876641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5876722Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5876988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5877063Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5877358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5877425Z self_outputs = self.self( 2025-08-14T21:38:25.5877693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5877849Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5878187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5878357Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-08-14T21:38:25.5878543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5878645Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5878648Z 2025-08-14T21:38:25.5878749Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5879079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5879157Z layer_outputs = layer_module( 2025-08-14T21:38:25.5879364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5879445Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5879710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5879785Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5880059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5880125Z self_outputs = self.self( 2025-08-14T21:38:25.5880395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5880507Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5880844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5880982Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-08-14T21:38:25.5881285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-08-14T21:38:25.5881372Z chunked_hidden_states = nn.functional.pad( 2025-08-14T21:38:25.5881562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-08-14T21:38:25.5881656Z return torch._C._nn.pad(input, pad, mode, value) 2025-08-14T21:38:25.5881660Z 2025-08-14T21:38:25.5881767Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5882098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5882171Z layer_outputs = layer_module( 2025-08-14T21:38:25.5882392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5882467Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5882739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5882812Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5883076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5883151Z self_outputs = self.self( 2025-08-14T21:38:25.5883463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5883584Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5883953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5884097Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5884101Z 2025-08-14T21:38:25.5884206Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5884541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5884617Z layer_outputs = layer_module( 2025-08-14T21:38:25.5884827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5884904Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5885174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5885248Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5885511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5885585Z self_outputs = self.self( 2025-08-14T21:38:25.5885845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-08-14T21:38:25.5885959Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-08-14T21:38:25.5886289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-08-14T21:38:25.5886429Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-08-14T21:38:25.5886433Z 2025-08-14T21:38:25.5886541Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5886871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5886948Z layer_outputs = layer_module( 2025-08-14T21:38:25.5887153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5887227Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5887496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5887568Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5887837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5887906Z self_outputs = self.self( 2025-08-14T21:38:25.5888168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-08-14T21:38:25.5888358Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-08-14T21:38:25.5888362Z 2025-08-14T21:38:25.5888438Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5888512Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5888618Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5888952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5889028Z layer_outputs = layer_module( 2025-08-14T21:38:25.5889265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5889340Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5889619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward 2025-08-14T21:38:25.5889735Z layer_output = apply_chunking_to_forward( 2025-08-14T21:38:25.5889993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:38:25.5890070Z return forward_fn(*input_tensors) 2025-08-14T21:38:25.5890340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk 2025-08-14T21:38:25.5890450Z intermediate_output = self.intermediate(attn_output) 2025-08-14T21:38:25.5890717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward 2025-08-14T21:38:25.5890824Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:38:25.5891037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:38:25.5891109Z return self.act(input) 2025-08-14T21:38:25.5891112Z 2025-08-14T21:38:25.5891195Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5891267Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5891364Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5891704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5891772Z layer_outputs = layer_module( 2025-08-14T21:38:25.5891988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5892061Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5892330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5892408Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5892676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5892741Z self_outputs = self.self( 2025-08-14T21:38:25.5893009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5893103Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5893428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5893600Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5893606Z 2025-08-14T21:38:25.5893681Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5893785Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5894115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5894193Z layer_outputs = layer_module( 2025-08-14T21:38:25.5894399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5894473Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5894745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5894814Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5895110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5895185Z self_outputs = self.self( 2025-08-14T21:38:25.5895447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5895581Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5895900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5896047Z key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False)) 2025-08-14T21:38:25.5896325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk 2025-08-14T21:38:25.5896397Z hidden_states = hidden_states.view( 2025-08-14T21:38:25.5896401Z 2025-08-14T21:38:25.5896505Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5896844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5896914Z layer_outputs = layer_module( 2025-08-14T21:38:25.5897134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5897208Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5897485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5897558Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5897825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5897900Z self_outputs = self.self( 2025-08-14T21:38:25.5898168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5898263Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5898594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5898769Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5898772Z 2025-08-14T21:38:25.5898879Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5899288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5899361Z layer_outputs = layer_module( 2025-08-14T21:38:25.5899659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5899750Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5900040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5900116Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5900398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5900490Z self_outputs = self.self( 2025-08-14T21:38:25.5900756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5900866Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5901194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5901412Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5901416Z 2025-08-14T21:38:25.5901531Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:38:25.5901896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5902016Z layer_outputs = layer_module( 2025-08-14T21:38:25.5902236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5902314Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5902603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5902678Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5902958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5903037Z self_outputs = self.self( 2025-08-14T21:38:25.5903316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-08-14T21:38:25.5903426Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-08-14T21:38:25.5903766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-08-14T21:38:25.5903946Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-08-14T21:38:25.5903950Z 2025-08-14T21:38:25.5904037Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5904116Z cudagraph partition due to non gpu ops 2025-08-14T21:38:25.5904227Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:38:25.5904582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-08-14T21:38:25.5904654Z layer_outputs = layer_module( 2025-08-14T21:38:25.5904881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:38:25.5904963Z return super().__call__(*args, **kwargs) 2025-08-14T21:38:25.5905245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-08-14T21:38:25.5905327Z self_attn_outputs = self.attention( 2025-08-14T21:38:25.5905608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-08-14T21:38:25.5905686Z self_outputs = self.self( 2025-08-14T21:38:25.5905965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-08-14T21:38:25.5906038Z attn_scores += diagonal_mask 2025-08-14T21:38:25.5906044Z 2025-08-14T21:38:25.5906155Z cudagraph partition due to non gpu ops. 
2025-08-14T21:38:25.5906508Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5906590Z     layer_outputs = layer_module(
2025-08-14T21:38:25.5906809Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5906887Z     return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5907176Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5907251Z     self_attn_outputs = self.attention(
2025-08-14T21:38:25.5907538Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5907639Z     self_outputs = self.self(
2025-08-14T21:38:25.5907917Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward
2025-08-14T21:38:25.5908003Z     attn_probs = nn.functional.softmax(
2025-08-14T21:38:25.5908037Z 
2025-08-14T21:38:25.5908116Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5908218Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5908575Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5908644Z     layer_outputs = layer_module(
2025-08-14T21:38:25.5908868Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5908944Z     return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5909224Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5909305Z     self_attn_outputs = self.attention(
2025-08-14T21:38:25.5909582Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5909662Z     self_outputs = self.self(
2025-08-14T21:38:25.5909942Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
2025-08-14T21:38:25.5910050Z     attn_output = self._sliding_chunks_matmul_attn_probs_value(
2025-08-14T21:38:25.5910378Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value
2025-08-14T21:38:25.5910539Z     padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1)
2025-08-14T21:38:25.5910722Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad
2025-08-14T21:38:25.5910822Z     return torch._C._nn.pad(input, pad, mode, value)
2025-08-14T21:38:25.5910826Z 
2025-08-14T21:38:25.5910923Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5911265Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5911335Z     layer_outputs = layer_module(
2025-08-14T21:38:25.5911544Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5911626Z     return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5911888Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5911966Z     self_attn_outputs = self.attention(
2025-08-14T21:38:25.5912229Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5912296Z     self_outputs = self.self(
2025-08-14T21:38:25.5912562Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
2025-08-14T21:38:25.5912674Z     attn_output = self._sliding_chunks_matmul_attn_probs_value(
2025-08-14T21:38:25.5913017Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value
2025-08-14T21:38:25.5913147Z     chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs)
2025-08-14T21:38:25.5913451Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize
2025-08-14T21:38:25.5913543Z     chunked_hidden_states = nn.functional.pad(
2025-08-14T21:38:25.5913752Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad
2025-08-14T21:38:25.5913844Z     return torch._C._nn.pad(input, pad, mode, value)
2025-08-14T21:38:25.5913855Z 
2025-08-14T21:38:25.5913953Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5914309Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5914385Z     layer_outputs = layer_module(
2025-08-14T21:38:25.5914587Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5914658Z     return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5914925Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5914994Z     self_attn_outputs = self.attention(
2025-08-14T21:38:25.5915259Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5915325Z     self_outputs = self.self(
2025-08-14T21:38:25.5915581Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward
2025-08-14T21:38:25.5915698Z     attn_output = self._sliding_chunks_matmul_attn_probs_value(
2025-08-14T21:38:25.5916022Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value
2025-08-14T21:38:25.5916169Z     context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value))
2025-08-14T21:38:25.5916173Z 
2025-08-14T21:38:25.5918542Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5918862Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5918937Z     layer_outputs = layer_module(
2025-08-14T21:38:25.5919138Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5919217Z     return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5919503Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5919573Z     self_attn_outputs = self.attention(
2025-08-14T21:38:25.5919834Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5919930Z     self_outputs = self.self(
2025-08-14T21:38:25.5920182Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward
2025-08-14T21:38:25.5920361Z     attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous()
2025-08-14T21:38:25.5920364Z 
2025-08-14T21:38:25.5920438Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5920520Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5920617Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5920942Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5921018Z     layer_outputs = layer_module(
2025-08-14T21:38:25.5921221Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5921303Z     return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5921563Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1211, in forward
2025-08-14T21:38:25.5921642Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:38:25.5921890Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:38:25.5921964Z     return forward_fn(*input_tensors)
2025-08-14T21:38:25.5922232Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1218, in ff_chunk
2025-08-14T21:38:25.5922336Z     intermediate_output = self.intermediate(attn_output)
2025-08-14T21:38:25.5922590Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1161, in forward
2025-08-14T21:38:25.5922702Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:38:25.5922905Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:38:25.5922973Z     return self.act(input)
2025-08-14T21:38:25.5922976Z 
2025-08-14T21:38:25.5923056Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5923130Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5923246Z cudagraph partition due to non gpu ops.
2025-08-14T21:38:25.5925598Z cudagraph partition due to non gpu ops
2025-08-14T21:38:25.5925731Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:38:25.5926061Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244
2025-08-14T21:38:25.5926137Z     layer_outputs = layer_module(
2025-08-14T21:38:25.5926347Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:38:25.5926427Z     return super().__call__(*args, **kwargs)
2025-08-14T21:38:25.5926689Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward
2025-08-14T21:38:25.5926765Z     self_attn_outputs = self.attention(
2025-08-14T21:38:25.5927033Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward
2025-08-14T21:38:25.5927097Z     self_outputs = self.self(
2025-08-14T21:38:25.5927363Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward
2025-08-14T21:38:25.5927464Z     attn_scores = self._sliding_chunks_query_key_matmul(
2025-08-14T21:38:25.5927783Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 790, in _sliding_chunks_query_key_matmul
2025-08-14T21:38:25.5927935Z     key = self._chunk(key, window_overlap, getattr(self.config, "onnx_export", False))
2025-08-14T21:38:25.5928199Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 719, in _chunk
2025-08-14T21:38:25.5928276Z     hidden_states = hidden_states.view(
2025-08-14T21:38:25.5928279Z 
2025-08-14T21:38:25.5928386Z cudagraph partition due to non gpu ops.
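The "cudagraph partition due to non gpu ops" messages above come from Inductor's cudagraph handling splitting a compiled graph around operations it cannot capture in a CUDA graph; on a cpu_inductor_* shard every op runs on CPU, so the message fires for essentially every region and the "Found from :" stack just points at the originating call site. A minimal sketch of the kind of compiled function that produces such partition points follows (hypothetical repro, not taken from this job; attention_like, the shapes, and the .item() sync are invented for illustration):

    # Hypothetical sketch: mode="reduce-overhead" asks Inductor to use cudagraphs;
    # the .item() call is a host-side sync that cannot live inside a CUDA graph,
    # so on a CUDA build the graph is partitioned around it.
    import torch

    def attention_like(q, k):
        scores = torch.einsum("bcxd,bcyd->bcxy", q, k)  # capturable tensor work
        peak = scores.max().item()                      # a "non gpu op" from cudagraphs' point of view
        return torch.softmax(scores - peak, dim=-1)

    compiled = torch.compile(attention_like, mode="reduce-overhead")
    if torch.cuda.is_available():
        q = torch.randn(2, 3, 4, 8, device="cuda")
        compiled(q, q)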
2025-08-14T21:39:07.0075892Z cudagraph partition due to non gpu ops
2025-08-14T21:39:07.0077893Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:39:07.0080057Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1723, in torch_dynamo_resume_in_forward_at_1703
2025-08-14T21:39:07.0080723Z     masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:39:07.0081165Z 
2025-08-14T21:39:08.7283100Z Compilation time (from dynamo_timed): 74.19516814
2025-08-14T21:39:08.7519048Z pass
2025-08-14T21:39:08.7523238Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:08.7525359Z TIMING: gc:0.00717 entire_frame_compile:74.19517 _recursive_pre_grad_passes:0.13073 _recursive_joint_graph_passes:1.02186 _recursive_post_grad_passes:0.91154 async_compile.wait:2.94608 code_gen:51.21783 inductor_compile:55.72578 backend_compile:68.90188 total_wall_time:74.19517
2025-08-14T21:39:08.7526343Z STATS: call_* op count: 1787 | FakeTensorMode.__torch_dispatch__:71895 | FakeTensor.__torch_dispatch__:9282 | ProxyTorchDispatchMode.__torch_dispatch__:18266
2025-08-14T21:39:08.7526884Z Dynamo produced 4 graphs covering 1787 ops with 4 graph breaks (1 unique)
2025-08-14T21:39:15.2401735Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:39:15.2402756Z   from pkg_resources import resource_filename
2025-08-14T21:39:15.8704012Z 
2025-08-14T21:39:18.7946438Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:39:18.7946713Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:39:18.7967221Z cpu eval BartForCausalLM
2025-08-14T21:39:20.4523787Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:21.1279513Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:21.7680249Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:39:31.2697236Z cudagraph partition due to non gpu ops
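The TIMING line above is the flat per-phase breakdown the benchmark harness prints from dynamo_timed; the usual reading (an assumption, not something the log states) is that the phases nest, i.e. code_gen sits inside inductor_compile, which sits inside backend_compile, which sits inside total_wall_time, so the numbers are not meant to be summed. A short, hypothetical sketch of pulling the same per-phase data from a standalone run:

    # Hypothetical sketch, not from this job: compile_times() aggregates the
    # dynamo_timed data that the TIMING: line above is built from.
    import torch
    import torch._dynamo.utils as dynamo_utils

    @torch.compile(backend="inductor")
    def f(x):
        return torch.nn.functional.softmax(x @ x.T, dim=-1)

    f(torch.randn(64, 64))               # the first call pays the compile cost
    print(dynamo_utils.compile_times())  # per-phase timing table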
2025-08-14T21:39:31.2701946Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:39:31.2702367Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:31.2702723Z     return mod(**inputs)
2025-08-14T21:39:31.2703164Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward
2025-08-14T21:39:31.2703598Z     outputs = self.model.decoder(
2025-08-14T21:39:31.2704404Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
2025-08-14T21:39:31.2704846Z     layer_outputs = decoder_layer(
2025-08-14T21:39:31.2705233Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:31.2705628Z     return super().__call__(*args, **kwargs)
2025-08-14T21:39:31.2706157Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward
2025-08-14T21:39:31.2706603Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:39:31.2707042Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:39:31.2707476Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:39:31.2707964Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:39:31.2708490Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:39:31.2708707Z 
2025-08-14T21:39:31.2708833Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:39:31.2709226Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:31.2709588Z     return mod(**inputs)
2025-08-14T21:39:31.2709984Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward
2025-08-14T21:39:31.2710414Z     outputs = self.model.decoder(
2025-08-14T21:39:31.2710821Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
2025-08-14T21:39:31.2711241Z     layer_outputs = decoder_layer(
2025-08-14T21:39:31.2711616Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:31.2712007Z     return super().__call__(*args, **kwargs)
2025-08-14T21:39:31.2712413Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward
2025-08-14T21:39:31.2712894Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:39:31.2713303Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:39:31.2713712Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:39:31.2714155Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:39:31.2714616Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:39:31.2714779Z 
2025-08-14T21:39:31.2714871Z cudagraph partition due to non gpu ops
2025-08-14T21:39:31.2715078Z cudagraph partition due to non gpu ops
2025-08-14T21:39:31.2715322Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:39:31.2715691Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:39:31.2716026Z     return mod(**inputs)
2025-08-14T21:39:31.2716381Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward
2025-08-14T21:39:31.2716771Z     outputs = self.model.decoder(
2025-08-14T21:39:31.2717153Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
2025-08-14T21:39:31.2717532Z     layer_outputs = decoder_layer(
2025-08-14T21:39:31.2717878Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:39:31.2718242Z     return super().__call__(*args, **kwargs)
2025-08-14T21:39:31.2718627Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward
2025-08-14T21:39:31.2719080Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:39:31.2719536Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:39:31.2719886Z     return self.act(input)
2025-08-14T21:39:31.2719998Z 
2025-08-14T21:39:31.2720077Z cudagraph partition due to non gpu ops
2025-08-14T21:39:31.2727626Z cudagraph partition due to non gpu ops.
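The BART partition points above land on adjacent lines of sdpa_attention_forward: the scaled_dot_product_attention call (line 81) and the transpose(1, 2).contiguous() that follows it (line 91). A hedged sketch of that call pattern in isolation (shapes are invented; this is not code from transformers or from this job):

    # Hypothetical sketch mirroring the two call sites reported above.
    import torch
    import torch.nn.functional as F

    q = k = v = torch.randn(1, 16, 128, 64)                   # (batch, heads, seq, head_dim)
    attn_output = F.scaled_dot_product_attention(q, k, v)     # first reported site
    attn_output = attn_output.transpose(1, 2).contiguous()    # second site, back to (batch, seq, heads, head_dim)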
Found from : 2025-08-14T21:39:31.2727991Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2728427Z return mod(**inputs) 2025-08-14T21:39:31.2728789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2729178Z outputs = self.model.decoder( 2025-08-14T21:39:31.2729551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2729960Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2730332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2730717Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2731122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:31.2731564Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:31.2731995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:31.2732434Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:31.2732893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:31.2733374Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:31.2733546Z 2025-08-14T21:39:31.2733671Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2733892Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2734144Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2734526Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2734905Z return mod(**inputs) 2025-08-14T21:39:31.2735283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2735692Z outputs = self.model.decoder( 2025-08-14T21:39:31.2736094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2736493Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2736863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2737246Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2737662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:31.2738113Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:31.2738535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:31.2738899Z return self.act(input) 2025-08-14T21:39:31.2739018Z 2025-08-14T21:39:31.2739110Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2739327Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2739664Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2739903Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2740126Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2740358Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2740592Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2740805Z 
cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2741064Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2741464Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2742032Z return mod(**inputs) 2025-08-14T21:39:31.2742441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2742860Z outputs = self.model.decoder( 2025-08-14T21:39:31.2743265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2743665Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2744038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2744425Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2744837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:31.2745274Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:31.2745712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:31.2746150Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:31.2746613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:31.2747129Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:31.2747318Z 2025-08-14T21:39:31.2747424Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2747788Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2748109Z return mod(**inputs) 2025-08-14T21:39:31.2748585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2748976Z outputs = self.model.decoder( 2025-08-14T21:39:31.2749352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2749798Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2750152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2750520Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2750906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:31.2751323Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:31.2751735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:31.2752144Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:31.2752586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:31.2753047Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:31.2753212Z 2025-08-14T21:39:31.2753302Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2753507Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2753747Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:31.2754111Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2754438Z return mod(**inputs) 2025-08-14T21:39:31.2754794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2755182Z outputs = self.model.decoder( 2025-08-14T21:39:31.2755566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2755947Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2756298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2756674Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2757047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:31.2757448Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:31.2757822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:31.2758150Z return self.act(input) 2025-08-14T21:39:31.2758255Z 2025-08-14T21:39:31.2758339Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2758563Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2758762Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2758961Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2759150Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2759350Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2759545Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2759733Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2759965Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2760316Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2760625Z return mod(**inputs) 2025-08-14T21:39:31.2760964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2761334Z outputs = self.model.decoder( 2025-08-14T21:39:31.2761706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2762075Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2762451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2762806Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2763184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:31.2763612Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:31.2764012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:31.2764412Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:31.2764845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:31.2765316Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:31.2765501Z 2025-08-14T21:39:31.2765603Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:39:31.2765962Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2766274Z return mod(**inputs) 2025-08-14T21:39:31.2766628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2767014Z outputs = self.model.decoder( 2025-08-14T21:39:31.2767385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2767754Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2768091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2768447Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2768821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:31.2769227Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:31.2769639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:31.2770048Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:31.2770493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:31.2770954Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:31.2771119Z 2025-08-14T21:39:31.2771208Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2771414Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2771655Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2772022Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2772350Z return mod(**inputs) 2025-08-14T21:39:31.2772713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2773110Z outputs = self.model.decoder( 2025-08-14T21:39:31.2773496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2773881Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2774233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2774598Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2774991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:39:31.2775414Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:39:31.2775812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:39:31.2776195Z return self.act(input) 2025-08-14T21:39:31.2776308Z 2025-08-14T21:39:31.2776397Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2776607Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2776824Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2777078Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2777277Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2777483Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2777691Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2777892Z 
cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2778128Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2778496Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2778831Z return mod(**inputs) 2025-08-14T21:39:31.2779191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2779694Z outputs = self.model.decoder( 2025-08-14T21:39:31.2780107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2780521Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2780899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2781268Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2781660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:31.2782065Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:31.2782475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:31.2782894Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:31.2783339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:39:31.2783822Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:39:31.2784014Z 2025-08-14T21:39:31.2784120Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2784484Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2784804Z return mod(**inputs) 2025-08-14T21:39:31.2785164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-08-14T21:39:31.2785551Z outputs = self.model.decoder( 2025-08-14T21:39:31.2785932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:39:31.2786313Z layer_outputs = decoder_layer( 2025-08-14T21:39:31.2786666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:39:31.2787034Z return super().__call__(*args, **kwargs) 2025-08-14T21:39:31.2787413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:39:31.2787829Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:39:31.2788234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:39:31.2788686Z attn_output, attn_weights = attention_interface( 2025-08-14T21:39:31.2789123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:39:31.2789585Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:39:31.2789750Z 2025-08-14T21:39:31.2789842Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2790056Z cudagraph partition due to non gpu ops 2025-08-14T21:39:31.2790335Z cudagraph partition due to non gpu ops. 
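Note: the tracebacks above all end at the same three call sites inside the compiled BART decoder: F.scaled_dot_product_attention (sdpa_attention.py:81), the transpose(1, 2).contiguous() on its output (sdpa_attention.py:91), and the fc1 + activation feed-forward (modeling_bart.py:445). Below is a minimal, hypothetical sketch of that call pattern under torch.compile; the module and its names are illustrative stand-ins, not the transformers code, and on a CPU-only job like this one every op is a "non gpu op" to Inductor's cudagraph partitioner, which is presumably what the repeated messages record.

    # Illustrative stand-in for the decoder-layer pattern named in the tracebacks;
    # TinyDecoderLayer, dim and heads are hypothetical names, not harness code.
    import torch
    import torch.nn.functional as F

    class TinyDecoderLayer(torch.nn.Module):
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.heads = heads
            self.qkv = torch.nn.Linear(dim, 3 * dim)
            self.fc1 = torch.nn.Linear(dim, 4 * dim)
            self.act = torch.nn.GELU()

        def forward(self, x):
            b, t, d = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            # reshape to (batch, heads, seq, head_dim), the layout SDPA expects
            q, k, v = (y.view(b, t, self.heads, d // self.heads).transpose(1, 2)
                       for y in (q, k, v))
            attn = F.scaled_dot_product_attention(q, k, v)          # cf. sdpa_attention.py:81
            attn = attn.transpose(1, 2).contiguous().view(b, t, d)  # cf. sdpa_attention.py:91
            return self.act(self.fc1(attn))                         # cf. modeling_bart.py:445

    # "reduce-overhead" enables the cudagraph path; on CPU there is nothing to put
    # into a CUDA graph, so the compiler partitions around (or skips) those regions.
    compiled = torch.compile(TinyDecoderLayer(), mode="reduce-overhead")
    out = compiled(torch.randn(2, 8, 64))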
Found from : 2025-08-14T21:39:31.2921517Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2921835Z return mod(**inputs) 2025-08-14T21:39:31.2922188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1917, in forward 2025-08-14T21:39:31.2922559Z logits = self.lm_head(outputs[0]) 2025-08-14T21:39:31.2922694Z 2025-08-14T21:39:31.2922800Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:39:31.2923149Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:39:31.2923463Z return mod(**inputs) 2025-08-14T21:39:31.2923817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1923, in forward 2025-08-14T21:39:31.2924264Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-08-14T21:39:31.2924458Z 2025-08-14T21:39:41.4974150Z Compilation time (from dynamo_timed): 17.740034185 2025-08-14T21:39:41.5199047Z pass 2025-08-14T21:39:41.5199514Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:39:41.5202992Z TIMING: _recursive_pre_grad_passes:0.03679 _recursive_joint_graph_passes:0.67283 _recursive_post_grad_passes:0.07879 async_compile.wait:0.81798 code_gen:9.51625 inductor_compile:11.21139 backend_compile:15.5019 gc:0.00198 entire_frame_compile:17.74003 total_wall_time:17.74003 2025-08-14T21:39:41.5204100Z STATS: call_* op count: 372 | FakeTensorMode.__torch_dispatch__:24843 | FakeTensor.__torch_dispatch__:3951 | ProxyTorchDispatchMode.__torch_dispatch__:5633 2025-08-14T21:39:41.5206598Z Dynamo produced 1 graphs covering 372 ops with 0 graph breaks (0 unique) 2025-08-14T21:39:47.3620327Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
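Note: the summary lines above ("Compilation time (from dynamo_timed)", the TIMING breakdown, and "Dynamo produced 1 graphs covering 372 ops with 0 graph breaks") are the harness's compile-time accounting. A rough, hypothetical way to observe the same two quantities on a toy function is sketched below: first-call compile cost versus steady state, plus graph and graph-break counts via torch._dynamo.explain (attribute names follow current PyTorch and may differ across versions).

    import time
    import torch

    def f(x):
        # small workload so compile overhead dominates the first call
        return torch.nn.functional.gelu(x @ x.t())

    compiled = torch.compile(f)
    x = torch.randn(256, 256)

    t0 = time.perf_counter()
    compiled(x)                                   # pays the Dynamo + Inductor compile cost
    print("first call (incl. compile):", time.perf_counter() - t0)

    t0 = time.perf_counter()
    compiled(x)                                   # cached, steady state
    print("steady state:", time.perf_counter() - t0)

    report = torch._dynamo.explain(f)(x)          # counts captured graphs and graph breaks
    print(report.graph_count, "graphs,", report.graph_break_count, "graph breaks")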
2025-08-14T21:39:47.3621467Z from pkg_resources import resource_filename 2025-08-14T21:39:48.1862253Z 2025-08-14T21:39:53.5743734Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:39:53.5744029Z loading model: 0it [00:05, ?it/s] 2025-08-14T21:39:53.5775270Z cpu eval BartForConditionalGeneration 2025-08-14T21:39:57.1779331Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:39:58.4973378Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:39:59.7914478Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:40:20.8745797Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8746118Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8746327Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8746558Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8746765Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8746959Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8747162Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8747362Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8747555Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8747815Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8748019Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8748256Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8748454Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8748660Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8748882Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8749088Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8749287Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8749491Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8749695Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8749927Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.8750308Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.8750646Z return mod(**inputs) 2025-08-14T21:40:20.8751036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.8751457Z outputs = self.model( 2025-08-14T21:40:20.8751824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:40:20.8752216Z encoder_outputs = self.encoder( 2025-08-14T21:40:20.8752615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:40:20.8753011Z layer_outputs = encoder_layer( 2025-08-14T21:40:20.8753363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.8753727Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.8754112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-08-14T21:40:20.8754509Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:40:20.8755270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.8755684Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.8756122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.8757661Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.8757857Z 2025-08-14T21:40:20.8757967Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.8758333Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.8758653Z return mod(**inputs) 2025-08-14T21:40:20.8759017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.8759399Z outputs = self.model( 2025-08-14T21:40:20.8759758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:40:20.8760143Z encoder_outputs = self.encoder( 2025-08-14T21:40:20.8760518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:40:20.8772128Z layer_outputs = encoder_layer( 2025-08-14T21:40:20.8772671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.8773084Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.8773503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-08-14T21:40:20.8773949Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:40:20.8774391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.8774836Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.8775327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.8775829Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.8776023Z 2025-08-14T21:40:20.8776126Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8776362Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8776640Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.8777052Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.8777421Z return mod(**inputs) 2025-08-14T21:40:20.8777818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.8778230Z outputs = self.model( 2025-08-14T21:40:20.8778690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:40:20.8779106Z encoder_outputs = self.encoder( 2025-08-14T21:40:20.8779660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:40:20.8780109Z layer_outputs = encoder_layer( 2025-08-14T21:40:20.8780500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.8780901Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.8781330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 323, in forward 2025-08-14T21:40:20.8781787Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.8782195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.8782561Z return self.act(input) 2025-08-14T21:40:20.8782690Z 2025-08-14T21:40:20.8782895Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8783129Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8783353Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8783577Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8783875Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8784092Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8784312Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8784531Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.8784776Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.8785175Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.8785528Z return mod(**inputs) 2025-08-14T21:40:20.8785918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.8786451Z outputs = self.model( 2025-08-14T21:40:20.8786848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-08-14T21:40:20.8787242Z encoder_outputs = self.encoder( 2025-08-14T21:40:20.8787636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-08-14T21:40:20.8788054Z layer_outputs = encoder_layer( 2025-08-14T21:40:20.8788431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.8788827Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.8789228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-08-14T21:40:20.8789656Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:40:20.8790089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.8790525Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.8790993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.8791515Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.8791713Z 2025-08-14T21:40:20.8791835Z cudagraph partition due to non gpu ops. 
Found from : 
2025-08-14T21:40:20.8792213Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:20.8792576Z     return mod(**inputs)
2025-08-14T21:40:20.8792962Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:40:20.8793375Z     outputs = self.model(
2025-08-14T21:40:20.8793753Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:40:20.8794165Z     encoder_outputs = self.encoder(
2025-08-14T21:40:20.8794563Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:40:20.8794960Z     layer_outputs = encoder_layer(
2025-08-14T21:40:20.8795334Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:20.8795725Z     return super().__call__(*args, **kwargs)
2025-08-14T21:40:20.8796133Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-08-14T21:40:20.8796555Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:40:20.8796979Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:40:20.8797420Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:40:20.8797983Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:40:20.8798474Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:40:20.8798653Z 
2025-08-14T21:40:20.8798741Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8799007Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8799254Z cudagraph partition due to non gpu ops. 
Found from : 
2025-08-14T21:40:20.8799645Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:20.8800248Z     return mod(**inputs)
2025-08-14T21:40:20.8800632Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:40:20.8801030Z     outputs = self.model(
2025-08-14T21:40:20.8801415Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:40:20.8801825Z     encoder_outputs = self.encoder(
2025-08-14T21:40:20.8802220Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:40:20.8802627Z     layer_outputs = encoder_layer(
2025-08-14T21:40:20.8803011Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:20.8803407Z     return super().__call__(*args, **kwargs)
2025-08-14T21:40:20.8803810Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 323, in forward
2025-08-14T21:40:20.8804267Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:40:20.8804682Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:40:20.8805064Z     return self.act(input)
2025-08-14T21:40:20.8805184Z 
2025-08-14T21:40:20.8805268Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8805500Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8805726Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8805939Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8806158Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8806380Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8806593Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8806814Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.8807065Z cudagraph partition due to non gpu ops. 
Found from : 
2025-08-14T21:40:20.8807450Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:20.8807808Z     return mod(**inputs)
2025-08-14T21:40:20.8808201Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:40:20.8808607Z     outputs = self.model(
2025-08-14T21:40:20.8808961Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-08-14T21:40:20.8809350Z     encoder_outputs = self.encoder(
2025-08-14T21:40:20.8809724Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-08-14T21:40:20.8810111Z     layer_outputs = encoder_layer(
2025-08-14T21:40:20.8810457Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:20.8810826Z     return super().__call__(*args, **kwargs)
2025-08-14T21:40:20.8811213Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-08-14T21:40:20.8811628Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:40:20.8812057Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:40:20.8812464Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:40:20.8812958Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:40:20.8813467Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:40:20.8813713Z 
2025-08-14T21:40:20.8813825Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9013710Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:40:20.9020143Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:40:20.9028139Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:40:20.9034438Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:40:20.9041063Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
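Every traceback above bottoms out either in transformers' sdpa_attention_forward (the torch.nn.functional.scaled_dot_product_attention call on line 81 or the attn_output.transpose(1, 2).contiguous() on line 91) or in the fc1 activation of a BART layer, and each is followed by inductor's "cudagraph partition due to non gpu ops" diagnostic. As a minimal sketch only, assuming a CUDA device is available and using a hypothetical TinySdpaAttention module rather than the benchmarked BART model, the code shape those frames point at looks roughly like this when compiled with torch.compile (mode="reduce-overhead" is what enables inductor's CUDA graphs):

# Minimal sketch; TinySdpaAttention is a hypothetical stand-in for the BART
# attention path seen in the frames above, not the benchmark harness itself.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySdpaAttention(nn.Module):
    def __init__(self, embed_dim: int = 64, num_heads: int = 4) -> None:
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seq, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (bsz, num_heads, seq, head_dim) before the SDPA call
        q = q.view(bsz, seq, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(bsz, seq, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(bsz, seq, self.num_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)   # cf. sdpa_attention.py:81 above
        attn = attn.transpose(1, 2).contiguous()         # cf. sdpa_attention.py:91 above
        return self.out(attn.view(bsz, seq, -1))


if torch.cuda.is_available():
    mod = TinySdpaAttention().cuda().eval()
    # mode="reduce-overhead" turns on CUDA graphs in inductor, where the
    # cudagraph partition decisions logged above are made.
    compiled = torch.compile(mod, mode="reduce-overhead")
    with torch.no_grad():
        out = compiled(torch.randn(2, 16, 64, device="cuda"))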
Found from : 2025-08-14T21:40:20.9048313Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9048634Z return mod(**inputs) 2025-08-14T21:40:20.9048996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9049483Z outputs = self.model( 2025-08-14T21:40:20.9049848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9050222Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9050663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9051052Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9051400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9051769Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9052157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9052570Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9052972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9053380Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9053829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9054315Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9054500Z 2025-08-14T21:40:20.9054608Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9054973Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9055302Z return mod(**inputs) 2025-08-14T21:40:20.9055656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9056046Z outputs = self.model( 2025-08-14T21:40:20.9056437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9056864Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9057235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9057623Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9057974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9058328Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9058732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9059143Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9059622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9060039Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9060514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9060989Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9061153Z 2025-08-14T21:40:20.9061248Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9061457Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9061672Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9061883Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9062085Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9062296Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9062506Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9062705Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9062942Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9063305Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9063680Z return mod(**inputs) 2025-08-14T21:40:20.9064040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9064425Z outputs = self.model( 2025-08-14T21:40:20.9064823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9065201Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9065588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9065962Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9066301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9066646Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9067022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9067428Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9067823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9068218Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9068647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9069112Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9069294Z 2025-08-14T21:40:20.9069397Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9069751Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9070065Z return mod(**inputs) 2025-08-14T21:40:20.9070414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9070774Z outputs = self.model( 2025-08-14T21:40:20.9071123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9071500Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9071861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9072236Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9072577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9072929Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9073293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9073697Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9074103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9074497Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9074918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9075364Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9075521Z 2025-08-14T21:40:20.9075607Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9075807Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9076041Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9076396Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9076715Z return mod(**inputs) 2025-08-14T21:40:20.9077095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9077468Z outputs = self.model( 2025-08-14T21:40:20.9077822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9078243Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9078611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9078981Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9079321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9079672Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9080046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9080468Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9080831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9081158Z return self.act(input) 2025-08-14T21:40:20.9081268Z 2025-08-14T21:40:20.9081344Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9081549Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9081741Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9081938Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9082135Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9082325Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9082523Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9082719Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9082933Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9083277Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9083586Z return mod(**inputs) 2025-08-14T21:40:20.9083929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9084282Z outputs = self.model( 2025-08-14T21:40:20.9084622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9084988Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9085338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9085696Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9086024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9086362Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9086721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9087115Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9087497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9087880Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9088291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9088743Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9088915Z 2025-08-14T21:40:20.9089023Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9089359Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9089669Z return mod(**inputs) 2025-08-14T21:40:20.9090010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9090412Z outputs = self.model( 2025-08-14T21:40:20.9090748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9091114Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9091504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9091867Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9092196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9092548Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9092919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9093307Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9093699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9094086Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9094508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9094948Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9095112Z 2025-08-14T21:40:20.9095192Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9095399Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9095597Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9095801Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9096005Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9096208Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9096404Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9096609Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9096839Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9097188Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9097509Z return mod(**inputs) 2025-08-14T21:40:20.9097861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9098223Z outputs = self.model( 2025-08-14T21:40:20.9098572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9098940Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9099335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9099790Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9100135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9100497Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9100879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9101321Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9101730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9102128Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9102551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9103017Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9103205Z 2025-08-14T21:40:20.9103308Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9103663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9104013Z return mod(**inputs) 2025-08-14T21:40:20.9104370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9104744Z outputs = self.model( 2025-08-14T21:40:20.9105122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9105499Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9105866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9106234Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9106568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9106922Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9107298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9107702Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9108096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9108494Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9108928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9109362Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9109526Z 2025-08-14T21:40:20.9109604Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9109815Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9110046Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9110390Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9110707Z return mod(**inputs) 2025-08-14T21:40:20.9111057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9111421Z outputs = self.model( 2025-08-14T21:40:20.9111772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9112150Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9112516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9112887Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9113228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9113582Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9113954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9114359Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9114728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9115056Z return self.act(input) 2025-08-14T21:40:20.9115161Z 2025-08-14T21:40:20.9115236Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9115440Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9115640Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9115713Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9115784Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9115866Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9115938Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9116017Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9116118Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9116344Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9116418Z return mod(**inputs) 2025-08-14T21:40:20.9116656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9116756Z outputs = self.model( 2025-08-14T21:40:20.9117001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9117071Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9117312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9117381Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9117589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9117674Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9117911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9118007Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9118246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9118342Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9118625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9118747Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9118750Z 2025-08-14T21:40:20.9118850Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9119045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9119107Z return mod(**inputs) 2025-08-14T21:40:20.9119353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9119419Z outputs = self.model( 2025-08-14T21:40:20.9119652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9119736Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9119971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9120040Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9120257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9120333Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9120573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9120669Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9120899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9120999Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9121272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9121383Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9121386Z 2025-08-14T21:40:20.9121464Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9121538Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9121622Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9121694Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9121765Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9121848Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9121920Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9122029Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9122141Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9122333Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9122454Z return mod(**inputs) 2025-08-14T21:40:20.9122699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9122765Z outputs = self.model( 2025-08-14T21:40:20.9123020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9123093Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9123342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9123412Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9123628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9123717Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9123956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9124062Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9124313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9124407Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9124697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9124821Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9124824Z 2025-08-14T21:40:20.9124922Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9125129Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9125192Z return mod(**inputs) 2025-08-14T21:40:20.9125447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9125515Z outputs = self.model( 2025-08-14T21:40:20.9125756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9125834Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9126073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9126143Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9126367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9126446Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9126698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9126800Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9127042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9127141Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9127422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9127524Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9127535Z 2025-08-14T21:40:20.9127611Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9127687Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9127797Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9128028Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9128094Z return mod(**inputs) 2025-08-14T21:40:20.9128341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9128443Z outputs = self.model( 2025-08-14T21:40:20.9128678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9128759Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9128996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9129073Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9129283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9129361Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9129615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9129733Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9129963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9130028Z return self.act(input) 2025-08-14T21:40:20.9130032Z 2025-08-14T21:40:20.9130107Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130187Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130259Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130331Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130414Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130484Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130557Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130638Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9130740Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9130937Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9130999Z return mod(**inputs) 2025-08-14T21:40:20.9131236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9131308Z outputs = self.model( 2025-08-14T21:40:20.9131544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9131614Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9131856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9131926Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9132145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9132222Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9132456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9132562Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9132796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9132898Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9133172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9133294Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9133297Z 2025-08-14T21:40:20.9133406Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9133643Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9133709Z return mod(**inputs) 2025-08-14T21:40:20.9133957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9134056Z outputs = self.model( 2025-08-14T21:40:20.9134297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9134369Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9134605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9134685Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9134894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9134977Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9135215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9135310Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9135550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9135642Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9135915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9136026Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9136030Z 2025-08-14T21:40:20.9136105Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136188Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136260Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136331Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136411Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136486Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136558Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136636Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9136733Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9136933Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9136996Z return mod(**inputs) 2025-08-14T21:40:20.9137233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9137307Z outputs = self.model( 2025-08-14T21:40:20.9137541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9137612Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9137856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9137925Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9138139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9138218Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9138451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9138563Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9138801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9138894Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9139181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9139345Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9139350Z 2025-08-14T21:40:20.9139469Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9139767Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9139913Z return mod(**inputs) 2025-08-14T21:40:20.9140190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9140263Z outputs = self.model( 2025-08-14T21:40:20.9140598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9140673Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9140924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9141005Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9141238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9141323Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9141592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9141711Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9142189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9142297Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9142604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9142726Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9142730Z 2025-08-14T21:40:20.9142814Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9142908Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9143018Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9143231Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9143314Z return mod(**inputs) 2025-08-14T21:40:20.9143588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9143661Z outputs = self.model( 2025-08-14T21:40:20.9143946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9144024Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9144352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9144435Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9144681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9144774Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9145044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9145172Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9145405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9145479Z return self.act(input) 2025-08-14T21:40:20.9145483Z 2025-08-14T21:40:20.9145576Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9145657Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9145736Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9145823Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9145902Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9145982Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9146167Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9146249Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9146357Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9146574Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9146785Z return mod(**inputs) 2025-08-14T21:40:20.9147054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9147127Z outputs = self.model( 2025-08-14T21:40:20.9147391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9147477Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9147738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9147824Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9148056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9148140Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9148418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9148524Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9148781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9148894Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9149198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9149341Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9149344Z 2025-08-14T21:40:20.9149456Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9149666Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9149745Z return mod(**inputs) 2025-08-14T21:40:20.9150015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9150096Z outputs = self.model( 2025-08-14T21:40:20.9150362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9150434Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9150679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9150750Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9150964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9151052Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9151292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9151396Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9151636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9151729Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9152017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9152122Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9152125Z 2025-08-14T21:40:20.9152210Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152286Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152396Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152481Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152554Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152635Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152717Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152851Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9152952Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9153155Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9153220Z return mod(**inputs) 2025-08-14T21:40:20.9153474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9153540Z outputs = self.model( 2025-08-14T21:40:20.9153781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9153864Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9154104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9154174Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9154403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9154479Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9154728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9154837Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9155074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9155174Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9155459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9155592Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9155596Z 2025-08-14T21:40:20.9155697Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9155894Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9155965Z return mod(**inputs) 2025-08-14T21:40:20.9156208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9156274Z outputs = self.model( 2025-08-14T21:40:20.9156527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9156601Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9156853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9156923Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9157138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9157227Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9157469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9157574Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9157822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9157916Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9158209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9158347Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9158351Z 2025-08-14T21:40:20.9158429Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9158512Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9158613Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:40:20.9158846Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:20.9158909Z     return mod(**inputs)
2025-08-14T21:40:20.9159151Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:40:20.9159223Z     outputs = self.model(
2025-08-14T21:40:20.9159462Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
2025-08-14T21:40:20.9159534Z     decoder_outputs = self.decoder(
2025-08-14T21:40:20.9159781Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
2025-08-14T21:40:20.9159854Z     layer_outputs = decoder_layer(
2025-08-14T21:40:20.9160074Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:20.9160151Z     return super().__call__(*args, **kwargs)
2025-08-14T21:40:20.9160388Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward
2025-08-14T21:40:20.9160510Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:40:20.9160713Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:40:20.9160781Z     return self.act(input)
2025-08-14T21:40:20.9160792Z 
2025-08-14T21:40:20.9160868Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9160943Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9161024Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9161098Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9161173Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9161256Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9161329Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9161402Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9161514Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:40:20.9161706Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:20.9161778Z     return mod(**inputs)
2025-08-14T21:40:20.9162015Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:40:20.9162081Z     outputs = self.model(
2025-08-14T21:40:20.9162329Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
2025-08-14T21:40:20.9162401Z     decoder_outputs = self.decoder(
2025-08-14T21:40:20.9162641Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
2025-08-14T21:40:20.9162722Z     layer_outputs = decoder_layer(
2025-08-14T21:40:20.9162934Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:20.9163022Z     return super().__call__(*args, **kwargs)
2025-08-14T21:40:20.9163258Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward
2025-08-14T21:40:20.9163355Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:40:20.9163599Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:40:20.9163692Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:40:20.9163971Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:40:20.9164137Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:40:20.9164142Z 
2025-08-14T21:40:20.9164244Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:40:20.9164448Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:20.9164548Z     return mod(**inputs)
2025-08-14T21:40:20.9164797Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-08-14T21:40:20.9164872Z     outputs = self.model(
2025-08-14T21:40:20.9165118Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward
2025-08-14T21:40:20.9165199Z     decoder_outputs = self.decoder(
2025-08-14T21:40:20.9165447Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward
2025-08-14T21:40:20.9165518Z     layer_outputs = decoder_layer(
2025-08-14T21:40:20.9165755Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:20.9165833Z     return super().__call__(*args, **kwargs)
2025-08-14T21:40:20.9166076Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward
2025-08-14T21:40:20.9166183Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:40:20.9166426Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-08-14T21:40:20.9166525Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:40:20.9166810Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:40:20.9166913Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:40:20.9166918Z 
2025-08-14T21:40:20.9167006Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167082Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167163Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167237Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167313Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167394Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167468Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167541Z cudagraph partition due to non gpu ops
2025-08-14T21:40:20.9167651Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:40:20.9167859Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9167921Z return mod(**inputs) 2025-08-14T21:40:20.9168170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9168233Z outputs = self.model( 2025-08-14T21:40:20.9168487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9168558Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9168803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9168887Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9169105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9169183Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9169442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9169547Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9169798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9169944Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9170225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9170391Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9170394Z 2025-08-14T21:40:20.9170496Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9170714Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9170775Z return mod(**inputs) 2025-08-14T21:40:20.9171011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9171085Z outputs = self.model( 2025-08-14T21:40:20.9171325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9171397Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9171640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9171708Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9171926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9172000Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9172232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9172339Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9172568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9172667Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9172942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9173042Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9173046Z 2025-08-14T21:40:20.9173128Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9173206Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9173306Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9173506Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9173570Z return mod(**inputs) 2025-08-14T21:40:20.9173817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9173883Z outputs = self.model( 2025-08-14T21:40:20.9174126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9174206Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9174448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9174518Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9174736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9174816Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9175060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9175173Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9175377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9175453Z return self.act(input) 2025-08-14T21:40:20.9175456Z 2025-08-14T21:40:20.9175532Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9175647Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9175724Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9175798Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9175879Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9175953Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9176060Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9176142Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9176244Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9176437Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9176513Z return mod(**inputs) 2025-08-14T21:40:20.9176756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9176831Z outputs = self.model( 2025-08-14T21:40:20.9177075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9177149Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9177403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9177478Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9177698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9177786Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9178031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9178139Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9178383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9178478Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9178777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9178908Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9178914Z 2025-08-14T21:40:20.9179027Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9179225Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9179292Z return mod(**inputs) 2025-08-14T21:40:20.9179625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9179698Z outputs = self.model( 2025-08-14T21:40:20.9179947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9180033Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9180285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9180368Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9180588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9180670Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9180933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9181030Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9181267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9181369Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9181652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9181807Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9181811Z 2025-08-14T21:40:20.9181890Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9181964Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9182081Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9182285Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9182367Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9182439Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9182512Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9182593Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9182695Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9182894Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9182969Z return mod(**inputs) 2025-08-14T21:40:20.9183221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9183299Z outputs = self.model( 2025-08-14T21:40:20.9183544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9183623Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9183886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9183957Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9184169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9184255Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9184493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9184606Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9184843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9184937Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9185224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9185351Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9185354Z 2025-08-14T21:40:20.9185460Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9185652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9185716Z return mod(**inputs) 2025-08-14T21:40:20.9185963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9186028Z outputs = self.model( 2025-08-14T21:40:20.9186271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9186350Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9186588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9186667Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9186878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9186954Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9187199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9187302Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9187539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9187677Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9187969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9188080Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9188124Z 2025-08-14T21:40:20.9188204Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9188280Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9188388Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9188581Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9188653Z return mod(**inputs) 2025-08-14T21:40:20.9188895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9188961Z outputs = self.model( 2025-08-14T21:40:20.9189211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9189285Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9189522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9189604Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9189819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9189906Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9190144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9190260Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9190471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9190538Z return self.act(input) 2025-08-14T21:40:20.9190544Z 2025-08-14T21:40:20.9190622Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9190704Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9190779Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9190864Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9190938Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9191010Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9191093Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9191168Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9191268Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9191471Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9191534Z return mod(**inputs) 2025-08-14T21:40:20.9191779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9191854Z outputs = self.model( 2025-08-14T21:40:20.9192094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9192174Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9192417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9192487Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9192712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9192789Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9193035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9193132Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9193406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9193508Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9193789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9193945Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9193955Z 2025-08-14T21:40:20.9194055Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9194247Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9194317Z return mod(**inputs) 2025-08-14T21:40:20.9194556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9194621Z outputs = self.model( 2025-08-14T21:40:20.9194869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9194940Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9195182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9195256Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9195466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9195550Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9195785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9195881Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9196124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9196215Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9196505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9196611Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9196614Z 2025-08-14T21:40:20.9196694Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9196778Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9196852Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9196926Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9197007Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9197079Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9197158Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9197231Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9197332Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9197530Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9197597Z return mod(**inputs) 2025-08-14T21:40:20.9197838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9197913Z outputs = self.model( 2025-08-14T21:40:20.9198152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9198229Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9198470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9198540Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9198760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9198842Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9199826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9199954Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9200193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9200330Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9200608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9200733Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9200737Z 2025-08-14T21:40:20.9200842Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9201034Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9201106Z return mod(**inputs) 2025-08-14T21:40:20.9201347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9201413Z outputs = self.model( 2025-08-14T21:40:20.9201658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9201734Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9201971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9202050Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9202261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9202346Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9202583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9202688Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9202937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9203028Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9203313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9203420Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9203424Z 2025-08-14T21:40:20.9203499Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9203583Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9203683Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9203876Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9203948Z return mod(**inputs) 2025-08-14T21:40:20.9204193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9204268Z outputs = self.model( 2025-08-14T21:40:20.9204508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9204583Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9204831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9204901Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9205112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9205198Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9205433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9205555Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9205796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9205865Z return self.act(input) 2025-08-14T21:40:20.9205869Z 2025-08-14T21:40:20.9205953Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206061Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206136Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206218Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206292Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206372Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206445Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206519Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9206630Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9206825Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9206889Z return mod(**inputs) 2025-08-14T21:40:20.9207142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9207216Z outputs = self.model( 2025-08-14T21:40:20.9207463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9207538Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9207827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9207948Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9208171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9208248Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9208492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9208592Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9208836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9208930Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9209214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9209346Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9209350Z 2025-08-14T21:40:20.9209449Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9209655Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9209719Z return mod(**inputs) 2025-08-14T21:40:20.9209960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9210043Z outputs = self.model( 2025-08-14T21:40:20.9210279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9210351Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9210597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9210665Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9210880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9210956Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9211186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9211288Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9211560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9211654Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9211937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9212079Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9212083Z 2025-08-14T21:40:20.9212164Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212236Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212308Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212386Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212458Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212531Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212609Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212679Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9212782Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9212971Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9213033Z return mod(**inputs) 2025-08-14T21:40:20.9213274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9213341Z outputs = self.model( 2025-08-14T21:40:20.9213572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9213648Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9213881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9213954Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9214159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9214236Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9214473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9214573Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9214813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9214904Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9215175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9215305Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9215308Z 2025-08-14T21:40:20.9215407Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9215598Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9215672Z return mod(**inputs) 2025-08-14T21:40:20.9215911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9215983Z outputs = self.model( 2025-08-14T21:40:20.9216223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9216294Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9216540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9216611Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9216823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9216907Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9217185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9217299Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9217540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9217662Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9217952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9218058Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9218062Z 2025-08-14T21:40:20.9218146Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9218221Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9218320Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9218520Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9218584Z return mod(**inputs) 2025-08-14T21:40:20.9218833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9218907Z outputs = self.model( 2025-08-14T21:40:20.9219145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9219231Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9219561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9219647Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9219890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9219975Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9220237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9220372Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9220602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9220697Z return self.act(input) 2025-08-14T21:40:20.9220700Z 2025-08-14T21:40:20.9220775Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9220850Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9220931Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9221005Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9221078Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9221159Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9221231Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9221315Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9221413Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9221604Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9221677Z return mod(**inputs) 2025-08-14T21:40:20.9221913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9221982Z outputs = self.model( 2025-08-14T21:40:20.9222229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9222299Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9222543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9222611Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9222818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9222902Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9223173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9223272Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9223509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9223632Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9223916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9224036Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9224039Z 2025-08-14T21:40:20.9224139Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9224334Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9224397Z return mod(**inputs) 2025-08-14T21:40:20.9224643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9224709Z outputs = self.model( 2025-08-14T21:40:20.9224945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9225026Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9225259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9225329Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9225545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9225620Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9225863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9225960Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9226192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9226288Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9226565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9226672Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9226675Z 2025-08-14T21:40:20.9226749Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9226821Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9226902Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9226973Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9227045Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9227123Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9227193Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9227267Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9227372Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9227562Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9227634Z return mod(**inputs) 2025-08-14T21:40:20.9227869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9227936Z outputs = self.model( 2025-08-14T21:40:20.9228177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9228248Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9228482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9228557Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9228795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9228880Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9229113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9229244Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9229483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9229571Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9229849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9229969Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9229973Z 2025-08-14T21:40:20.9230069Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9230266Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9230328Z return mod(**inputs) 2025-08-14T21:40:20.9230563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9230639Z outputs = self.model( 2025-08-14T21:40:20.9230874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9230952Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9231184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9231252Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9231469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9231548Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9231787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9231890Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9232123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9232222Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9232493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9232594Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9232605Z 2025-08-14T21:40:20.9232681Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9232756Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9232861Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9233053Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9233116Z return mod(**inputs) 2025-08-14T21:40:20.9233359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9233426Z outputs = self.model( 2025-08-14T21:40:20.9233664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9233744Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9233976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9234054Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9234261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9234337Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9234639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9234753Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9234997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9235063Z return self.act(input) 2025-08-14T21:40:20.9235066Z 2025-08-14T21:40:20.9235140Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235217Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235290Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235360Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235438Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235509Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235581Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235661Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9235759Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9235957Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9236019Z return mod(**inputs) 2025-08-14T21:40:20.9236264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9236337Z outputs = self.model( 2025-08-14T21:40:20.9236567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9236639Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9236881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9236951Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9237168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9237245Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9237478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9237581Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9237814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9237913Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9238184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9238308Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9238311Z 2025-08-14T21:40:20.9238417Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9238607Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9238670Z return mod(**inputs) 2025-08-14T21:40:20.9238910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9238977Z outputs = self.model( 2025-08-14T21:40:20.9239219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9239288Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9239523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9239600Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9239805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9239879Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9240146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-08-14T21:40:20.9240242Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:40:20.9240484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9240614Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9240885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9240993Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9240996Z 2025-08-14T21:40:20.9241072Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241153Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241225Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241296Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241374Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241447Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241519Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241599Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9241695Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9242028Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9242105Z return mod(**inputs) 2025-08-14T21:40:20.9242341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9242414Z outputs = self.model( 2025-08-14T21:40:20.9242649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9242720Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9242969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9243040Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9243258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9243339Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9243571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9243681Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9243912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9244004Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9244286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:40:20.9244412Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:20.9244416Z 2025-08-14T21:40:20.9244522Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:20.9244712Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9244779Z return mod(**inputs) 2025-08-14T21:40:20.9245026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9245091Z outputs = self.model( 2025-08-14T21:40:20.9245337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9245410Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9245693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9245770Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9246073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9246155Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9246401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-08-14T21:40:20.9246558Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:40:20.9246799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-08-14T21:40:20.9246890Z attn_output, attn_weights = attention_interface( 2025-08-14T21:40:20.9247165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:40:20.9247277Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:40:20.9247280Z 2025-08-14T21:40:20.9247357Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9247436Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9247545Z cudagraph partition due to non gpu ops. 
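All four BART tracebacks route through the same indirection: modeling_bart.py line 254 hands the projected q/k/v to an attention_interface callable, which here resolves to the SDPA implementation in integrations/sdpa_attention.py. A sketch of that dispatch pattern (ATTENTION_IMPLS and the function names below are hypothetical stand-ins for illustration, not transformers' real registry):

    # Hypothetical dispatch sketch; names do not mirror transformers internals.
    from typing import Callable, Dict
    import torch
    import torch.nn.functional as F

    def eager_attention(q, k, v):
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        weights = scores.softmax(dim=-1)
        out = (weights @ v).transpose(1, 2).contiguous()
        return out, weights

    def sdpa_attention(q, k, v):
        out = F.scaled_dot_product_attention(q, k, v)
        return out.transpose(1, 2).contiguous(), None  # SDPA never materializes the weights

    ATTENTION_IMPLS: Dict[str, Callable] = {"eager": eager_attention, "sdpa": sdpa_attention}

    def run_attention(impl: str, q, k, v):
        attention_interface = ATTENTION_IMPLS[impl]      # the line-254 pattern from the traceback
        attn_output, attn_weights = attention_interface(q, k, v)
        return attn_output, attn_weights

    q = k = v = torch.randn(2, 8, 16, 64)
    out, _ = run_attention("sdpa", q, k, v)              # out: (2, 16, 8, 64)

Swapping the string swaps the kernel without touching the calling module, which is why both the self-attention and the cross-attention frames end in the same sdpa_attention_forward lines.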
Found from : 2025-08-14T21:40:20.9247735Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9247815Z return mod(**inputs) 2025-08-14T21:40:20.9248059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-08-14T21:40:20.9248124Z outputs = self.model( 2025-08-14T21:40:20.9248364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-08-14T21:40:20.9248436Z decoder_outputs = self.decoder( 2025-08-14T21:40:20.9248681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-08-14T21:40:20.9248750Z layer_outputs = decoder_layer( 2025-08-14T21:40:20.9248963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:20.9249049Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:20.9249280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 445, in forward 2025-08-14T21:40:20.9249395Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:40:20.9249602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:20.9249668Z return self.act(input) 2025-08-14T21:40:20.9249671Z 2025-08-14T21:40:20.9249754Z cudagraph partition due to non gpu ops 2025-08-14T21:40:20.9249851Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:40:20.9250038Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:20.9250109Z return mod(**inputs) 2025-08-14T21:40:20.9250345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1490, in forward 2025-08-14T21:40:20.9250423Z lm_logits = self.lm_head(outputs[0]) 2025-08-14T21:40:20.9250433Z 2025-08-14T21:40:20.9250528Z cudagraph partition due to non gpu ops. 
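The final tracebacks for this model reach the output head: lm_head turns the decoder states into (batch, seq_len, vocab_size) logits, and the loss call in the traceback that follows flattens logits and labels before CrossEntropyLoss. A small worked example of that flattening (the vocab size, batch and sequence length are illustrative):

    # Worked example of the loss_fct call shown in the following traceback;
    # sizes here are illustrative assumptions.
    import torch

    vocab_size, batch, seq_len = 50265, 2, 16
    lm_logits = torch.randn(batch, seq_len, vocab_size)            # lm_head output
    labels = torch.randint(0, vocab_size, (batch, seq_len))

    loss_fct = torch.nn.CrossEntropyLoss()
    masked_lm_loss = loss_fct(lm_logits.view(-1, vocab_size), labels.view(-1))
    print(masked_lm_loss)  # scalar: (batch*seq_len, vocab_size) logits vs (batch*seq_len,) targets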
Found from :
2025-08-14T21:40:20.9250716Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:20.9250786Z return mod(**inputs)
2025-08-14T21:40:20.9251022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1497, in forward
2025-08-14T21:40:20.9251181Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:40:20.9251185Z
2025-08-14T21:40:35.1557910Z Compilation time (from dynamo_timed): 32.735290655
2025-08-14T21:40:35.1695304Z pass
2025-08-14T21:40:35.1695794Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:35.1697098Z TIMING: _recursive_pre_grad_passes:0.08912 _recursive_joint_graph_passes:1.17415 _recursive_post_grad_passes:0.17381 async_compile.wait:0.86695 code_gen:12.20428 inductor_compile:16.12374 backend_compile:26.88181 gc:0.00045 entire_frame_compile:32.73529 total_wall_time:32.73529
2025-08-14T21:40:35.1698403Z STATS: call_* op count: 980 | FakeTensorMode.__torch_dispatch__:63398 | FakeTensor.__torch_dispatch__:9772 | ProxyTorchDispatchMode.__torch_dispatch__:13946
2025-08-14T21:40:35.1698948Z Dynamo produced 1 graphs covering 980 ops with 0 graph breaks (0 unique)
2025-08-14T21:40:41.4905664Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:40:41.4906720Z from pkg_resources import resource_filename
2025-08-14T21:40:42.0923131Z
2025-08-14T21:40:43.5254264Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:40:43.5257676Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:40:43.5268594Z cpu eval BertForMaskedLM
2025-08-14T21:40:44.0335673Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:44.2781087Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:44.6278729Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:40:54.3489371Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3490205Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3490511Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3490756Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3490998Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3491230Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3491500Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3491733Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3491984Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3492214Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3492454Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3492674Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3492898Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3493119Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3493334Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3493558Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3493782Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3494010Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3494310Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3494576Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:40:54.3494994Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:54.3495363Z return mod(**inputs)
2025-08-14T21:40:54.3495885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward
2025-08-14T21:40:54.3496364Z outputs = self.bert(
2025-08-14T21:40:54.3496769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward
2025-08-14T21:40:54.3497216Z encoder_outputs = self.encoder(
2025-08-14T21:40:54.3497662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward
2025-08-14T21:40:54.3498084Z layer_outputs = layer_module(
2025-08-14T21:40:54.3498484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:40:54.3498903Z return super().__call__(*args, **kwargs)
2025-08-14T21:40:54.3499942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward
2025-08-14T21:40:54.3500424Z self_attention_outputs = self.attention(
2025-08-14T21:40:54.3500878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:40:54.3501397Z return func(*args, **kwargs)
2025-08-14T21:40:54.3501803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward
2025-08-14T21:40:54.3502236Z self_outputs = self.self(
2025-08-14T21:40:54.3502649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:40:54.3503066Z return func(*args, **kwargs)
2025-08-14T21:40:54.3503475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward
2025-08-14T21:40:54.3503970Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:40:54.3504182Z
2025-08-14T21:40:54.3504282Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3504509Z cudagraph partition due to non gpu ops
2025-08-14T21:40:54.3504776Z cudagraph partition due to non gpu ops.
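The repeated "Trying to call the empty_gpu_cache for device: cpu" warnings above are the benchmark harness asking for an accelerator cache flush on a run whose device is plain CPU. A defensive sketch of that kind of guard (empty_cache_for is a hypothetical helper, not the harness's actual function; the xpu branch is only attempted when the build exposes it):

    # Hypothetical device-guarded cache flush, not benchmarks/dynamo/common.py itself.
    import torch

    def empty_cache_for(device: str) -> None:
        if device == "cuda" and torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()
        else:
            # CPU (and any other backend) has no accelerator cache to release,
            # which is exactly what the warning in the log points out.
            pass

    empty_cache_for("cpu")  # silently a no-op here

Guarding on the device string keeps the benchmark loop identical across CUDA, XPU and CPU shards instead of branching at every call site.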
Found from : 2025-08-14T21:40:54.3505184Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3505539Z return mod(**inputs) 2025-08-14T21:40:54.3505940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3506355Z outputs = self.bert( 2025-08-14T21:40:54.3506745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3507171Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3507585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3508023Z layer_outputs = layer_module( 2025-08-14T21:40:54.3508398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3508810Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3509247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3509690Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3510129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3510583Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3511029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3511546Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3512021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3512489Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3512897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3513273Z return self.act(input) 2025-08-14T21:40:54.3513403Z 2025-08-14T21:40:54.3513490Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3513717Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3513941Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3514154Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3514372Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3514591Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3514804Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3515023Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3515323Z cudagraph partition due to non gpu ops. 
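The second recurring BertForMaskedLM traceback goes through apply_chunking_to_forward, which can run the per-layer feed-forward over the sequence in slices to bound peak memory; with a chunk size of 0 it simply calls the wrapped function, which is the pass-through at pytorch_utils.py line 251 that the trace shows. A sketch of the idea (chunked_ffn, the hidden sizes and the chunk size are illustrative, not transformers' actual implementation):

    # Illustrative chunked feed-forward; not transformers.pytorch_utils code.
    import torch

    def chunked_ffn(forward_fn, chunk_size, hidden_states, chunk_dim=1):
        if chunk_size == 0:
            return forward_fn(hidden_states)                  # pass-through, as in the trace
        chunks = hidden_states.split(chunk_size, dim=chunk_dim)
        return torch.cat([forward_fn(c) for c in chunks], dim=chunk_dim)

    intermediate = torch.nn.Sequential(                        # BERT-style intermediate + output
        torch.nn.Linear(768, 3072), torch.nn.GELU(), torch.nn.Linear(3072, 768))
    hidden = torch.randn(2, 128, 768)                          # (batch, seq_len, hidden)
    out = chunked_ffn(intermediate, 32, hidden)                # four 32-token slices
    assert out.shape == hidden.shape

The partition points reported here sit on the activation inside that block (activations.py line 69), alongside the scaled_dot_product_attention call from the other recurring trace.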
Found from : 2025-08-14T21:40:54.3515713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3516072Z return mod(**inputs) 2025-08-14T21:40:54.3516467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3516916Z outputs = self.bert( 2025-08-14T21:40:54.3517294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3517707Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3518112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3518511Z layer_outputs = layer_module( 2025-08-14T21:40:54.3518881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3519275Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3519690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3520112Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3520530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3520929Z return func(*args, **kwargs) 2025-08-14T21:40:54.3521325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3521728Z self_outputs = self.self( 2025-08-14T21:40:54.3522112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3522510Z return func(*args, **kwargs) 2025-08-14T21:40:54.3522897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3523363Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3523565Z 2025-08-14T21:40:54.3523650Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3523884Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3524133Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3524527Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3524886Z return mod(**inputs) 2025-08-14T21:40:54.3525269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3525677Z outputs = self.bert( 2025-08-14T21:40:54.3526061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3526484Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3526881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3527285Z layer_outputs = layer_module( 2025-08-14T21:40:54.3527651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3528035Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3528439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3528855Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3529284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3529698Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3530133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3530652Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3531102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3531584Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3531999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3532373Z return self.act(input) 2025-08-14T21:40:54.3532493Z 2025-08-14T21:40:54.3532578Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3532808Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3533034Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3533256Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3533469Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3533689Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3533909Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3534128Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3534388Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3534782Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3535133Z return mod(**inputs) 2025-08-14T21:40:54.3535525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3535936Z outputs = self.bert( 2025-08-14T21:40:54.3536329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3536741Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3537159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3537587Z layer_outputs = layer_module( 2025-08-14T21:40:54.3537967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3538370Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3538798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3539234Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3539796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3540232Z return func(*args, **kwargs) 2025-08-14T21:40:54.3540641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3541057Z self_outputs = self.self( 2025-08-14T21:40:54.3541467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3542181Z return func(*args, **kwargs) 2025-08-14T21:40:54.3542600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3543086Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3543307Z 2025-08-14T21:40:54.3543399Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3543639Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3543897Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3544298Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3544665Z return mod(**inputs) 2025-08-14T21:40:54.3545070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3545495Z outputs = self.bert( 2025-08-14T21:40:54.3546021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3546480Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3546923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3547433Z layer_outputs = layer_module( 2025-08-14T21:40:54.3547862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3548275Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3548736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3549216Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3549675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3550159Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3550641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3551158Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3551665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3552164Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3552599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3553005Z return self.act(input) 2025-08-14T21:40:54.3553138Z 2025-08-14T21:40:54.3553229Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3553466Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3553692Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3553919Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3554147Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3554370Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3554595Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3554819Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3555075Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3555483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3555842Z return mod(**inputs) 2025-08-14T21:40:54.3556244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3556671Z outputs = self.bert( 2025-08-14T21:40:54.3557095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3557530Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3557932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3558359Z layer_outputs = layer_module( 2025-08-14T21:40:54.3558734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3559125Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3559542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3559964Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3560378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3560782Z return func(*args, **kwargs) 2025-08-14T21:40:54.3561184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3561598Z self_outputs = self.self( 2025-08-14T21:40:54.3562074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3562471Z return func(*args, **kwargs) 2025-08-14T21:40:54.3562875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3563379Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3563576Z 2025-08-14T21:40:54.3563668Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3563890Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3564148Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3564536Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3564883Z return mod(**inputs) 2025-08-14T21:40:54.3565273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3565679Z outputs = self.bert( 2025-08-14T21:40:54.3566060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3566466Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3566882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3567306Z layer_outputs = layer_module( 2025-08-14T21:40:54.3567681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3568180Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3568604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3569045Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3569472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3569902Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3570358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3570871Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3571339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3571799Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3572216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3572592Z return self.act(input) 2025-08-14T21:40:54.3572719Z 2025-08-14T21:40:54.3572805Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3573032Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3573256Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3573480Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3573702Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3573932Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3574150Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3574380Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3574640Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3575037Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3575395Z return mod(**inputs) 2025-08-14T21:40:54.3575796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3576209Z outputs = self.bert( 2025-08-14T21:40:54.3576595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3577015Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3577472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3577890Z layer_outputs = layer_module( 2025-08-14T21:40:54.3578277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3578719Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3579146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3579679Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3580107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3580522Z return func(*args, **kwargs) 2025-08-14T21:40:54.3580924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3581350Z self_outputs = self.self( 2025-08-14T21:40:54.3581756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3582171Z return func(*args, **kwargs) 2025-08-14T21:40:54.3582575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3583057Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3583259Z 2025-08-14T21:40:54.3583358Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3583589Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3583844Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3584243Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3584600Z return mod(**inputs) 2025-08-14T21:40:54.3584995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3585414Z outputs = self.bert( 2025-08-14T21:40:54.3585811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3586241Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3586650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3587071Z layer_outputs = layer_module( 2025-08-14T21:40:54.3587457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3587850Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3588278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3588711Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3589163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3589597Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3590048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3590554Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3591013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3591473Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3591888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3592260Z return self.act(input) 2025-08-14T21:40:54.3592384Z 2025-08-14T21:40:54.3592472Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3592757Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3592991Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3593212Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3593444Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3593714Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3593939Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3594161Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3594419Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3594822Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3595173Z return mod(**inputs) 2025-08-14T21:40:54.3595569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3595985Z outputs = self.bert( 2025-08-14T21:40:54.3596371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3596795Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3597200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3597622Z layer_outputs = layer_module( 2025-08-14T21:40:54.3597997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3598383Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3598792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3599216Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3599616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3600010Z return func(*args, **kwargs) 2025-08-14T21:40:54.3600402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3600797Z self_outputs = self.self( 2025-08-14T21:40:54.3601182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3601582Z return func(*args, **kwargs) 2025-08-14T21:40:54.3601982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3602461Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3602667Z 2025-08-14T21:40:54.3602764Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3602991Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3603236Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3603623Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3603975Z return mod(**inputs) 2025-08-14T21:40:54.3604376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3604799Z outputs = self.bert( 2025-08-14T21:40:54.3605180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3605595Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3605999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3606428Z layer_outputs = layer_module( 2025-08-14T21:40:54.3606799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3607186Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3608470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3608938Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3609386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3609933Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3610419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3610921Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3611388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3611846Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3612262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3612650Z return self.act(input) 2025-08-14T21:40:54.3612775Z 2025-08-14T21:40:54.3612871Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3613096Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3613327Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3613553Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3613776Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3614002Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3614232Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3614451Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3614714Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3615129Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3615503Z return mod(**inputs) 2025-08-14T21:40:54.3615892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3616309Z outputs = self.bert( 2025-08-14T21:40:54.3616703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3617130Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3617558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3617980Z layer_outputs = layer_module( 2025-08-14T21:40:54.3618364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3618754Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3619179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3619750Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3620175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3620593Z return func(*args, **kwargs) 2025-08-14T21:40:54.3621001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3621421Z self_outputs = self.self( 2025-08-14T21:40:54.3621809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3622237Z return func(*args, **kwargs) 2025-08-14T21:40:54.3622641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3623135Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3623336Z 2025-08-14T21:40:54.3623424Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3623656Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3623917Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3624356Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3624719Z return mod(**inputs) 2025-08-14T21:40:54.3625119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3625573Z outputs = self.bert( 2025-08-14T21:40:54.3625956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3626383Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3626799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3627222Z layer_outputs = layer_module( 2025-08-14T21:40:54.3627607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3628008Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3628427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3628851Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3629296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3629731Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3630171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3630669Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3631134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3631589Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3631998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3632381Z return self.act(input) 2025-08-14T21:40:54.3632503Z 2025-08-14T21:40:54.3632595Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3632828Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3633053Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3633279Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3633507Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3633725Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3633959Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3634185Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3634438Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3634834Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3635195Z return mod(**inputs) 2025-08-14T21:40:54.3635572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3635970Z outputs = self.bert( 2025-08-14T21:40:54.3636349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3636766Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3637159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3637558Z layer_outputs = layer_module( 2025-08-14T21:40:54.3637929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3638312Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3638709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3639121Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3639574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3639973Z return func(*args, **kwargs) 2025-08-14T21:40:54.3640376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3640830Z self_outputs = self.self( 2025-08-14T21:40:54.3641236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3641628Z return func(*args, **kwargs) 2025-08-14T21:40:54.3642244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3642730Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3642930Z 2025-08-14T21:40:54.3643029Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3643268Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3643532Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3643936Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3644284Z return mod(**inputs) 2025-08-14T21:40:54.3644687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3645117Z outputs = self.bert( 2025-08-14T21:40:54.3645499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3645927Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3646324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3646741Z layer_outputs = layer_module( 2025-08-14T21:40:54.3647116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3647516Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3647930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3648364Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3648798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3649230Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3649676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3650162Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3650625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3651096Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3651509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3651880Z return self.act(input) 2025-08-14T21:40:54.3652014Z 2025-08-14T21:40:54.3652103Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3652338Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3652559Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3652788Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3653014Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3653239Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3653456Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3653684Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3653943Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3654332Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3654792Z return mod(**inputs) 2025-08-14T21:40:54.3655191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3655596Z outputs = self.bert( 2025-08-14T21:40:54.3656064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3656491Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3656911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3657318Z layer_outputs = layer_module( 2025-08-14T21:40:54.3657698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3658090Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3658504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3658932Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3659353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3659844Z return func(*args, **kwargs) 2025-08-14T21:40:54.3660242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3660666Z self_outputs = self.self( 2025-08-14T21:40:54.3661059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3661468Z return func(*args, **kwargs) 2025-08-14T21:40:54.3661865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3662356Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3662554Z 2025-08-14T21:40:54.3662656Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3662881Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3663145Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3663546Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3663905Z return mod(**inputs) 2025-08-14T21:40:54.3664296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3664713Z outputs = self.bert( 2025-08-14T21:40:54.3665102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3665526Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3665934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3666357Z layer_outputs = layer_module( 2025-08-14T21:40:54.3666739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3667123Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3667543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3667985Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3668416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3668847Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3669306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3669802Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3670310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3670776Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3671193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3671604Z return self.act(input) 2025-08-14T21:40:54.3671728Z 2025-08-14T21:40:54.3671815Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3672047Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3672275Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3672497Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3672725Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3672950Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3673168Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3673395Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3673654Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3674056Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3674408Z return mod(**inputs) 2025-08-14T21:40:54.3674799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3675220Z outputs = self.bert( 2025-08-14T21:40:54.3675606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3676037Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3676442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3676844Z layer_outputs = layer_module( 2025-08-14T21:40:54.3677205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3677590Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3678002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3678412Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3678833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3679239Z return func(*args, **kwargs) 2025-08-14T21:40:54.3679649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3680052Z self_outputs = self.self( 2025-08-14T21:40:54.3680449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3680849Z return func(*args, **kwargs) 2025-08-14T21:40:54.3681231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3681699Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3681907Z 2025-08-14T21:40:54.3681995Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3682227Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3682483Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3682885Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3683232Z return mod(**inputs) 2025-08-14T21:40:54.3683612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3684033Z outputs = self.bert( 2025-08-14T21:40:54.3684422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3684845Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3685282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3685706Z layer_outputs = layer_module( 2025-08-14T21:40:54.3686087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3686532Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3686948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3687379Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3687820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3688251Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3688710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3689209Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3689679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3690138Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3690563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3690961Z return self.act(input) 2025-08-14T21:40:54.3691082Z 2025-08-14T21:40:54.3691169Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3691406Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3691637Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3691862Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3692079Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3692305Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3692532Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3692752Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3693014Z cudagraph partition due to non gpu ops. 
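The activation frame at the bottom of these feed-forward tracebacks (transformers/activations.py) is the configured hidden activation, resolved through transformers' ACT2FN table and stored as self.intermediate_act_fn. A small sketch, assuming the usual "gelu" setting:

    import torch
    from transformers.activations import ACT2FN

    # BertIntermediate looks up config.hidden_act (e.g. "gelu") in ACT2FN; for the
    # default, exact variant this simply forwards to torch.nn.functional.gelu.
    act = ACT2FN["gelu"]
    hidden_states = torch.randn(2, 128, 3072)  # arbitrary (batch, seq_len, intermediate) sizes
    print(act(hidden_states).shape)            # torch.Size([2, 128, 3072])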
Found from : 2025-08-14T21:40:54.3693415Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3693775Z return mod(**inputs) 2025-08-14T21:40:54.3694179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3694598Z outputs = self.bert( 2025-08-14T21:40:54.3694993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3695409Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3695828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3696247Z layer_outputs = layer_module( 2025-08-14T21:40:54.3696628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3697026Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3697453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3697893Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3698306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3698714Z return func(*args, **kwargs) 2025-08-14T21:40:54.3699121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3699618Z self_outputs = self.self( 2025-08-14T21:40:54.3700018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3700426Z return func(*args, **kwargs) 2025-08-14T21:40:54.3700881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3701355Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3701564Z 2025-08-14T21:40:54.3701690Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3701926Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3702186Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3702580Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3702937Z return mod(**inputs) 2025-08-14T21:40:54.3703331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3703742Z outputs = self.bert( 2025-08-14T21:40:54.3704133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3704561Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3704976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3705393Z layer_outputs = layer_module( 2025-08-14T21:40:54.3705777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3706171Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3706580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3707014Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3707455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3707886Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3708326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3708826Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3709291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3709752Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3710162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3710536Z return self.act(input) 2025-08-14T21:40:54.3710657Z 2025-08-14T21:40:54.3710753Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3710980Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3711208Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3711436Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3711653Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3711878Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3712103Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3712327Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3712576Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3713019Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3713390Z return mod(**inputs) 2025-08-14T21:40:54.3713775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3714189Z outputs = self.bert( 2025-08-14T21:40:54.3714579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3715011Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3715422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3715889Z layer_outputs = layer_module( 2025-08-14T21:40:54.3716271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3716670Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3717157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:40:54.3717600Z self_attention_outputs = self.attention( 2025-08-14T21:40:54.3718018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3718418Z return func(*args, **kwargs) 2025-08-14T21:40:54.3718832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:40:54.3719268Z self_outputs = self.self( 2025-08-14T21:40:54.3719654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:40:54.3720057Z return func(*args, **kwargs) 2025-08-14T21:40:54.3720469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:40:54.3720966Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:40:54.3721166Z 2025-08-14T21:40:54.3721255Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3721492Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3721754Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:40:54.3722147Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:40:54.3722527Z return mod(**inputs) 2025-08-14T21:40:54.3722929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-08-14T21:40:54.3723356Z outputs = self.bert( 2025-08-14T21:40:54.3723736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:40:54.3724160Z encoder_outputs = self.encoder( 2025-08-14T21:40:54.3724574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:40:54.3725000Z layer_outputs = layer_module( 2025-08-14T21:40:54.3725358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:40:54.3725744Z return super().__call__(*args, **kwargs) 2025-08-14T21:40:54.3726151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:40:54.3726570Z layer_output = apply_chunking_to_forward( 2025-08-14T21:40:54.3727008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:40:54.3727442Z return forward_fn(*input_tensors) 2025-08-14T21:40:54.3727890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:40:54.3728380Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:40:54.3728846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:40:54.3729314Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:40:54.3729720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:40:54.3730093Z return self.act(input) 2025-08-14T21:40:54.3730223Z 2025-08-14T21:40:54.3730311Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3730544Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3730766Z cudagraph partition due to non gpu ops 2025-08-14T21:40:54.3731084Z cudagraph partition due to non gpu ops. 
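The repeated "cudagraph partition due to non gpu ops" lines come from Inductor's CUDA-graph partitioning logic: on this cpu_inductor_freezing_huggingface shard every op runs on CPU, so nothing can be captured into a CUDA graph and the partitioner only records the reason. A minimal sketch of the kind of setup involved, assuming only that CUDA graphs are requested via mode="reduce-overhead"; the benchmark runner's actual inductor configuration may differ, and this sketch is not claimed to reproduce the exact log message:

    import torch

    class TinyMLP(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = torch.nn.Linear(16, 64)
            self.fc2 = torch.nn.Linear(64, 16)

        def forward(self, x):
            return self.fc2(torch.nn.functional.gelu(self.fc1(x)))

    model = TinyMLP().eval()
    x = torch.randn(8, 16)  # CPU tensors, as in this job

    # mode="reduce-overhead" asks for CUDA graphs; with CPU-only inputs the
    # cudagraph machinery has nothing to capture and execution falls back to
    # the regular compiled path.
    compiled = torch.compile(model, mode="reduce-overhead")
    with torch.no_grad():
        out = compiled(x)
    print(out.shape)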
Found from :
2025-08-14T21:40:54.3731490Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:40:54.3731843Z return mod(**inputs)
2025-08-14T21:40:54.3732240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1328, in forward
2025-08-14T21:40:54.3732835Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:40:54.3733096Z
2025-08-14T21:41:03.3019189Z Compilation time (from dynamo_timed): 17.261613912
2025-08-14T21:41:03.3107427Z pass
2025-08-14T21:41:03.3108010Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:03.3108859Z TIMING: _recursive_pre_grad_passes:0.03608 _recursive_joint_graph_passes:0.38356 _recursive_post_grad_passes:0.07538 async_compile.wait:0.82977 code_gen:8.65626 inductor_compile:10.31208 backend_compile:14.55093 gc:0.00198 entire_frame_compile:17.26161 total_wall_time:17.26161
2025-08-14T21:41:03.3109881Z STATS: call_* op count: 289 | FakeTensorMode.__torch_dispatch__:24084 | FakeTensor.__torch_dispatch__:3845 | ProxyTorchDispatchMode.__torch_dispatch__:5315
2025-08-14T21:41:03.3110389Z Dynamo produced 1 graphs covering 289 ops with 0 graph breaks (0 unique)
2025-08-14T21:41:08.9310884Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:41:08.9311892Z from pkg_resources import resource_filename
2025-08-14T21:41:09.5739998Z
2025-08-14T21:41:10.7745000Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:41:10.7746850Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:41:10.7760335Z cpu eval BertForQuestionAnswering
2025-08-14T21:41:11.1919756Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:11.3812414Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:11.5750974Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:41:21.4038458Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4039297Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4039572Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4039783Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4039986Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4040198Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4040407Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4040605Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4040807Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4041013Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4041246Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4041455Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4041653Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4042146Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4042361Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4042648Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4042934Z cudagraph partition due to non gpu ops
2025-08-14T21:41:21.4043231Z cudagraph
partition due to non gpu ops 2025-08-14T21:41:21.4043528Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4043861Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:21.4044399Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4044904Z return mod(**inputs) 2025-08-14T21:41:21.4045707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4046097Z outputs = self.bert( 2025-08-14T21:41:21.4046463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4046845Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4047318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4047721Z layer_outputs = layer_module( 2025-08-14T21:41:21.4048067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4048424Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4048808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4049195Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4049576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4049942Z return func(*args, **kwargs) 2025-08-14T21:41:21.4050305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4050682Z self_outputs = self.self( 2025-08-14T21:41:21.4051031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4051396Z return func(*args, **kwargs) 2025-08-14T21:41:21.4051858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4052563Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4052758Z 2025-08-14T21:41:21.4052844Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4053062Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4053309Z cudagraph partition due to non gpu ops. 
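The "Compilation time (from dynamo_timed)", TIMING, STATS, and "Dynamo produced 1 graphs ..." lines above come from the harness's own instrumentation. For a standalone way to reproduce graph and graph-break counts like these, torch._dynamo.explain prints a similar summary; this is a minimal sketch on a toy function, not the harness's code path:

    import torch
    import torch._dynamo

    def f(x):
        # A few ops so the captured graph is non-trivial.
        y = torch.nn.functional.gelu(x @ x.T)
        return y.sum()

    x = torch.randn(32, 32)

    # Prints a report including graph count, graph break count, and op count,
    # analogous to "Dynamo produced 1 graphs covering N ops with 0 graph breaks".
    explanation = torch._dynamo.explain(f)(x)
    print(explanation)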
Found from : 2025-08-14T21:41:21.4053674Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4054012Z return mod(**inputs) 2025-08-14T21:41:21.4054457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4054840Z outputs = self.bert( 2025-08-14T21:41:21.4055198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4055611Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4056014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4056412Z layer_outputs = layer_module( 2025-08-14T21:41:21.4056780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4057169Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4057582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4058012Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4058447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4058884Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4059318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4059984Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4060446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4060903Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4061407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4061799Z return self.act(input) 2025-08-14T21:41:21.4061930Z 2025-08-14T21:41:21.4062017Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4062292Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4062512Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4062737Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4062963Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4063178Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4063404Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4063631Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4063881Z cudagraph partition due to non gpu ops. 
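The repeated "WARNING:common:Trying to call the empty_gpu_cache for device: cpu" lines earlier in the log point to a small device-dispatching cache-clear helper in the benchmark harness. The following is a hypothetical reconstruction for illustration only; the real helper in benchmarks/dynamo/common.py may differ:

    import logging
    import torch

    log = logging.getLogger("common")

    def empty_gpu_cache(device: str) -> None:
        # Hypothetical re-creation of the helper behind the warning above;
        # on accelerator devices it frees cached allocator blocks, on anything
        # else it just warns and does nothing.
        if device == "cuda":
            torch.cuda.empty_cache()
        elif device == "xpu":
            torch.xpu.empty_cache()
        else:
            log.warning(
                "Trying to call the empty_gpu_cache for device: %s, which is not in list [cuda, xpu]",
                device,
            )

    empty_gpu_cache("cpu")  # on this CPU-only shard the call is a no-op plus a warning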
Found from : 2025-08-14T21:41:21.4064276Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4064637Z return mod(**inputs) 2025-08-14T21:41:21.4065029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4065433Z outputs = self.bert( 2025-08-14T21:41:21.4065816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4066234Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4066637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4067035Z layer_outputs = layer_module( 2025-08-14T21:41:21.4067377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4067736Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4068108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4068498Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4068878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4069249Z return func(*args, **kwargs) 2025-08-14T21:41:21.4069612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4069990Z self_outputs = self.self( 2025-08-14T21:41:21.4070347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4070703Z return func(*args, **kwargs) 2025-08-14T21:41:21.4071067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4071503Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4071688Z 2025-08-14T21:41:21.4071772Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4071976Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4072214Z cudagraph partition due to non gpu ops. 
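The masked_lm_loss frame in the summary block above (modeling_bert.py, line 1328) is a standard cross-entropy over flattened token logits. A small sketch with stand-in sizes (30522 is bert-base-uncased's vocabulary size; batch and sequence length here are arbitrary):

    import torch

    vocab_size = 30522
    prediction_scores = torch.randn(2, 128, vocab_size)      # (batch, seq_len, vocab)
    labels = torch.randint(0, vocab_size, (2, 128))          # real runs use -100 for unmasked positions

    # CrossEntropyLoss ignores index -100 by default, so only masked positions
    # contribute when labels are prepared the usual masked-LM way.
    loss_fct = torch.nn.CrossEntropyLoss()
    masked_lm_loss = loss_fct(prediction_scores.view(-1, vocab_size), labels.view(-1))
    print(masked_lm_loss.item())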
Found from : 2025-08-14T21:41:21.4072574Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4072901Z return mod(**inputs) 2025-08-14T21:41:21.4073257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4073629Z outputs = self.bert( 2025-08-14T21:41:21.4073986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4074360Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4074730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4075107Z layer_outputs = layer_module( 2025-08-14T21:41:21.4075514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4075874Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4076249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4076673Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4077068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4077461Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4077866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4078314Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4078741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4079162Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4079536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4079869Z return self.act(input) 2025-08-14T21:41:21.4079988Z 2025-08-14T21:41:21.4080068Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4080280Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4080486Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4080681Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4080883Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4081086Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4081279Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4081590Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4081820Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4082176Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4082501Z return mod(**inputs) 2025-08-14T21:41:21.4082854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4083228Z outputs = self.bert( 2025-08-14T21:41:21.4083570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4083957Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4084346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4084727Z layer_outputs = layer_module( 2025-08-14T21:41:21.4085072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4085437Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4085830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4086219Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4086604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4086992Z return func(*args, **kwargs) 2025-08-14T21:41:21.4087353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4087732Z self_outputs = self.self( 2025-08-14T21:41:21.4088099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4088480Z return func(*args, **kwargs) 2025-08-14T21:41:21.4088843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4089326Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4089517Z 2025-08-14T21:41:21.4089607Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4089833Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4090068Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4090460Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4090781Z return mod(**inputs) 2025-08-14T21:41:21.4091131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4091507Z outputs = self.bert( 2025-08-14T21:41:21.4091864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4092245Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4092618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4093007Z layer_outputs = layer_module( 2025-08-14T21:41:21.4093376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4093731Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4094115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4094504Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4094910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4095295Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4095697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4096150Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4096565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4096976Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4097358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4097707Z return self.act(input) 2025-08-14T21:41:21.4097819Z 2025-08-14T21:41:21.4097900Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4098114Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4098323Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4098524Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4098733Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4098943Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4099151Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4099350Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4099717Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4100093Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4100428Z return mod(**inputs) 2025-08-14T21:41:21.4100822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4101233Z outputs = self.bert( 2025-08-14T21:41:21.4101607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4102115Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4102585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4102970Z layer_outputs = layer_module( 2025-08-14T21:41:21.4103369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4103785Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4104179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4104571Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4104989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4105377Z return func(*args, **kwargs) 2025-08-14T21:41:21.4105740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4106115Z self_outputs = self.self( 2025-08-14T21:41:21.4106483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4106862Z return func(*args, **kwargs) 2025-08-14T21:41:21.4107231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4107659Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4107847Z 2025-08-14T21:41:21.4107927Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4108143Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4108372Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4108733Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4109066Z return mod(**inputs) 2025-08-14T21:41:21.4109428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4109799Z outputs = self.bert( 2025-08-14T21:41:21.4110156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4110543Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4110916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4111299Z layer_outputs = layer_module( 2025-08-14T21:41:21.4111645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4112014Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4112386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4112781Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4113206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4113599Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4114002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4114455Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4114881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4115298Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4115680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4116024Z return self.act(input) 2025-08-14T21:41:21.4116134Z 2025-08-14T21:41:21.4116222Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4116430Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4116639Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4116848Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4117051Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4117259Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4117547Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4117751Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4117991Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4118360Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4118719Z return mod(**inputs) 2025-08-14T21:41:21.4119077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4119459Z outputs = self.bert( 2025-08-14T21:41:21.4119817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4120196Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4120579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4130463Z layer_outputs = layer_module( 2025-08-14T21:41:21.4131042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4131464Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4131907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4132352Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4132780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4133189Z return func(*args, **kwargs) 2025-08-14T21:41:21.4133593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4134018Z self_outputs = self.self( 2025-08-14T21:41:21.4134420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4134831Z return func(*args, **kwargs) 2025-08-14T21:41:21.4135219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4135701Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4135909Z 2025-08-14T21:41:21.4136011Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4136235Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4136498Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4136900Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4137257Z return mod(**inputs) 2025-08-14T21:41:21.4137639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4138046Z outputs = self.bert( 2025-08-14T21:41:21.4138433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4138838Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4139244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4139759Z layer_outputs = layer_module( 2025-08-14T21:41:21.4140136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4140517Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4140926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4141350Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4142049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4142490Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4143162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4143657Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4144199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4144651Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4145060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4145427Z return self.act(input) 2025-08-14T21:41:21.4145547Z 2025-08-14T21:41:21.4145635Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4145866Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4146099Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4146317Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4146539Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4146764Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4146978Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4147198Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4147452Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4147847Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4148194Z return mod(**inputs) 2025-08-14T21:41:21.4148577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4148980Z outputs = self.bert( 2025-08-14T21:41:21.4149353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4149766Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4150171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4150580Z layer_outputs = layer_module( 2025-08-14T21:41:21.4150943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4151334Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4151746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4152161Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4152548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4152925Z return func(*args, **kwargs) 2025-08-14T21:41:21.4153299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4153673Z self_outputs = self.self( 2025-08-14T21:41:21.4154040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4154414Z return func(*args, **kwargs) 2025-08-14T21:41:21.4154778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4155223Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4155417Z 2025-08-14T21:41:21.4155496Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4155711Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4155944Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4156309Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4156634Z return mod(**inputs) 2025-08-14T21:41:21.4156995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4157411Z outputs = self.bert( 2025-08-14T21:41:21.4157775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4158160Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4158565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4158948Z layer_outputs = layer_module( 2025-08-14T21:41:21.4159302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4159668Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4160046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4160446Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4160853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4161246Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4161658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4162120Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4162552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4162968Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4163351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4163700Z return self.act(input) 2025-08-14T21:41:21.4163812Z 2025-08-14T21:41:21.4163901Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4164108Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4164316Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4164525Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4164725Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4164932Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4165140Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4165342Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4165575Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4165940Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4166271Z return mod(**inputs) 2025-08-14T21:41:21.4166628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4167005Z outputs = self.bert( 2025-08-14T21:41:21.4167362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4167741Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4168125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4168508Z layer_outputs = layer_module( 2025-08-14T21:41:21.4168856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4169213Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4169598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4169996Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4170371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4170747Z return func(*args, **kwargs) 2025-08-14T21:41:21.4171119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4171556Z self_outputs = self.self( 2025-08-14T21:41:21.4171923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4172300Z return func(*args, **kwargs) 2025-08-14T21:41:21.4172701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4173130Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4173320Z 2025-08-14T21:41:21.4173398Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4173611Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4173847Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4174216Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4174559Z return mod(**inputs) 2025-08-14T21:41:21.4174943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4175346Z outputs = self.bert( 2025-08-14T21:41:21.4175737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4176166Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4176569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4176973Z layer_outputs = layer_module( 2025-08-14T21:41:21.4177347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4177739Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4178142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4178568Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4179003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4179435Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4179962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4180467Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4180924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4181382Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4181789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4182160Z return self.act(input) 2025-08-14T21:41:21.4182282Z 2025-08-14T21:41:21.4182381Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4182607Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4182833Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4183057Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4183282Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4183498Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4183717Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4183937Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4184185Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4184570Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4184938Z return mod(**inputs) 2025-08-14T21:41:21.4185314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4185713Z outputs = self.bert( 2025-08-14T21:41:21.4186138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4186555Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4186949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4187401Z layer_outputs = layer_module( 2025-08-14T21:41:21.4187756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4188132Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4188512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4188909Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4189298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4189668Z return func(*args, **kwargs) 2025-08-14T21:41:21.4190046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4190431Z self_outputs = self.self( 2025-08-14T21:41:21.4190792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4191162Z return func(*args, **kwargs) 2025-08-14T21:41:21.4191537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4191979Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4192163Z 2025-08-14T21:41:21.4192250Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4192452Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4192688Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4193049Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4193377Z return mod(**inputs) 2025-08-14T21:41:21.4193742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4194118Z outputs = self.bert( 2025-08-14T21:41:21.4194467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4194861Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4195238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4195621Z layer_outputs = layer_module( 2025-08-14T21:41:21.4195961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4196328Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4196717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4197114Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4197510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4197912Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4198324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4198777Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4199209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4199635Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4200019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4200391Z return self.act(input) 2025-08-14T21:41:21.4200513Z 2025-08-14T21:41:21.4200592Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4200802Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4201006Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4201262Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4201470Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4201676Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4201882Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4202080Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4202310Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4256861Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4257186Z return mod(**inputs) 2025-08-14T21:41:21.4257536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4257959Z outputs = self.bert( 2025-08-14T21:41:21.4258318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4258705Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4259077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4259566Z layer_outputs = layer_module( 2025-08-14T21:41:21.4259957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4260361Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4260780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-08-14T21:41:21.4261210Z self_attention_outputs = self.attention( 2025-08-14T21:41:21.4261639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4262012Z return func(*args, **kwargs) 2025-08-14T21:41:21.4262389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-08-14T21:41:21.4262778Z self_outputs = self.self( 2025-08-14T21:41:21.4263146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:41:21.4263513Z return func(*args, **kwargs) 2025-08-14T21:41:21.4263893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-08-14T21:41:21.4264335Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:41:21.4264529Z 2025-08-14T21:41:21.4264611Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4264825Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4265062Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:41:21.4265428Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4265754Z return mod(**inputs) 2025-08-14T21:41:21.4266108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-08-14T21:41:21.4266488Z outputs = self.bert( 2025-08-14T21:41:21.4266845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-08-14T21:41:21.4267234Z encoder_outputs = self.encoder( 2025-08-14T21:41:21.4267608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-08-14T21:41:21.4267992Z layer_outputs = layer_module( 2025-08-14T21:41:21.4268343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:41:21.4268703Z return super().__call__(*args, **kwargs) 2025-08-14T21:41:21.4269092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 614, in forward 2025-08-14T21:41:21.4269487Z layer_output = apply_chunking_to_forward( 2025-08-14T21:41:21.4269894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:41:21.4270289Z return forward_fn(*input_tensors) 2025-08-14T21:41:21.4270697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 622, in feed_forward_chunk 2025-08-14T21:41:21.4271206Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:41:21.4271634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 540, in forward 2025-08-14T21:41:21.4272055Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:41:21.4272471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:41:21.4272805Z return self.act(input) 2025-08-14T21:41:21.4272913Z 2025-08-14T21:41:21.4272990Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4273198Z cudagraph partition due to non gpu ops 2025-08-14T21:41:21.4273428Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:41:21.4273782Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4274092Z return mod(**inputs) 2025-08-14T21:41:21.4274445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1799, in forward 2025-08-14T21:41:21.4274849Z start_loss = loss_fct(start_logits, start_positions) 2025-08-14T21:41:21.4275002Z 2025-08-14T21:41:21.4275104Z cudagraph partition due to non gpu ops. 
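The last two partition sites reported for this model are the span-classification loss terms in BertForQuestionAnswering: start_loss at modeling_bert.py:1799 here, and end_loss at line 1800 in the next entry. The usual pattern behind those frames, sketched from the common transformers QA-head recipe (treat the details as assumptions rather than the exact benchmark code), is a CrossEntropyLoss over clamped start/end positions:

# Assumed sketch of the QA-head loss pattern the two entries point at.
import torch
import torch.nn as nn

def qa_span_loss(start_logits, end_logits, start_positions, end_positions):
    # Targets outside the sequence are clamped and then ignored by the loss.
    ignored_index = start_logits.size(1)
    start_positions = start_positions.clamp(0, ignored_index)
    end_positions = end_positions.clamp(0, ignored_index)
    loss_fct = nn.CrossEntropyLoss(ignore_index=ignored_index)
    start_loss = loss_fct(start_logits, start_positions)   # cf. modeling_bert.py:1799
    end_loss = loss_fct(end_logits, end_positions)          # cf. modeling_bert.py:1800
    return (start_loss + end_loss) / 2

start_logits = torch.randn(2, 384)
end_logits = torch.randn(2, 384)
loss = qa_span_loss(start_logits, end_logits,
                    torch.tensor([5, 42]), torch.tensor([9, 50]))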
Found from : 2025-08-14T21:41:21.4275458Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:41:21.4275771Z return mod(**inputs) 2025-08-14T21:41:21.4276117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1800, in forward 2025-08-14T21:41:21.4276505Z end_loss = loss_fct(end_logits, end_positions) 2025-08-14T21:41:21.4276659Z 2025-08-14T21:41:29.3900168Z Compilation time (from dynamo_timed): 16.473763494 2025-08-14T21:41:29.3900541Z pass 2025-08-14T21:41:29.3900915Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:41:29.3901912Z TIMING: _recursive_pre_grad_passes:0.03551 _recursive_joint_graph_passes:0.3625 _recursive_post_grad_passes:0.08773 async_compile.wait:0.00262 code_gen:7.78435 inductor_compile:9.50252 backend_compile:13.70324 gc:0.0002 entire_frame_compile:16.47376 total_wall_time:16.47376 2025-08-14T21:41:29.3903016Z STATS: call_* op count: 296 | FakeTensorMode.__torch_dispatch__:23997 | FakeTensor.__torch_dispatch__:3869 | ProxyTorchDispatchMode.__torch_dispatch__:5351 2025-08-14T21:41:29.3903614Z Dynamo produced 1 graphs covering 296 ops with 0 graph breaks (0 unique) 2025-08-14T21:41:35.1543472Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:41:35.1544441Z from pkg_resources import resource_filename 2025-08-14T21:41:35.8095961Z 2025-08-14T21:41:55.1173793Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:41:55.1174118Z loading model: 0it [00:19, ?it/s] 2025-08-14T21:41:55.1211077Z cpu eval BlenderbotForCausalLM 2025-08-14T21:41:55.3286269Z Compilation time (from dynamo_timed): 0 2025-08-14T21:41:55.3286733Z pass_due_to_skip 2025-08-14T21:41:55.3287199Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:41:55.3287561Z TIMING: total_wall_time:0 2025-08-14T21:41:55.3287770Z STATS: call_* op count: 0 2025-08-14T21:41:55.3288044Z Dynamo produced 0 graphs covering 0 ops with 0 graph breaks (0 unique) 2025-08-14T21:42:00.3570735Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-08-14T21:42:00.3571801Z from pkg_resources import resource_filename 2025-08-14T21:42:00.9357346Z 2025-08-14T21:42:01.8454116Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:42:01.8454440Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:42:01.8464204Z cpu eval BlenderbotSmallForCausalLM 2025-08-14T21:42:02.0174527Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:42:02.0739265Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:42:02.1258276Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:42:09.2261052Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2261423Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2261652Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2261873Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2262114Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2262379Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2262609Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2262834Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2263066Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2263296Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2263526Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2263747Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2263969Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2264189Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2264420Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2264688Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:42:09.2265118Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:09.2265499Z return mod(**inputs) 2025-08-14T21:42:09.2266016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:42:09.2266507Z outputs = self.model.decoder( 2025-08-14T21:42:09.2266976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:09.2267471Z layer_outputs = decoder_layer( 2025-08-14T21:42:09.2267860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:09.2268264Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:09.2268815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:09.2269332Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:09.2269863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:09.2270373Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:09.2270844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:09.2271364Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:09.2271585Z 2025-08-14T21:42:09.2271710Z cudagraph partition due to non gpu ops. 
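Both BlenderbotSmall partition sites in the surrounding entries fall inside transformers' shared sdpa_attention_forward helper: the scaled_dot_product_attention call (sdpa_attention.py:81) and the transpose(1, 2).contiguous() that follows it (line 91). A rough, assumed sketch of that pattern, keeping only the two statements the log cites (the real helper also handles masks, dropout and grouped-query attention), is:

# Rough, assumed sketch of the sdpa_attention_forward pattern referenced above.
import torch
import torch.nn.functional as F

def sdpa_attention_forward_sketch(query, key, value, attention_mask=None, is_causal=False):
    # cf. sdpa_attention.py:81 in the traces above
    attn_output = F.scaled_dot_product_attention(
        query, key, value, attn_mask=attention_mask, is_causal=is_causal
    )
    # cf. sdpa_attention.py:91 in the traces above
    attn_output = attn_output.transpose(1, 2).contiguous()
    return attn_output, None

q = k = v = torch.randn(1, 8, 64, 32)   # (batch, heads, seq, head_dim)
out, _ = sdpa_attention_forward_sketch(q, k, v)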
Found from : 2025-08-14T21:42:09.2272104Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:09.2272465Z return mod(**inputs) 2025-08-14T21:42:09.2272919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:42:09.2273404Z outputs = self.model.decoder( 2025-08-14T21:42:09.2274293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:09.2274857Z layer_outputs = decoder_layer( 2025-08-14T21:42:09.2275256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:09.2275673Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:09.2276860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:09.2277370Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:09.2277863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:09.2278362Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:09.2278854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:09.2279350Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:09.2279526Z 2025-08-14T21:42:09.2279623Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2279842Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2280092Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:09.2280486Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:09.2280838Z return mod(**inputs) 2025-08-14T21:42:09.2281287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:42:09.2281772Z outputs = self.model.decoder( 2025-08-14T21:42:09.2282234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:09.2282691Z layer_outputs = decoder_layer( 2025-08-14T21:42:09.2283077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:09.2283475Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:09.2283950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:42:09.2284463Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:09.2284882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:09.2285252Z return self.act(input) 2025-08-14T21:42:09.2285370Z 2025-08-14T21:42:09.2285463Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2285679Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2285900Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2286115Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2286327Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2286548Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2286766Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2286974Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2287218Z cudagraph partition due to non gpu ops. 
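The third recurring BlenderbotSmall frame is the decoder feed-forward site at modeling_blenderbot_small.py:430, hidden_states = self.activation_fn(self.fc1(hidden_states)). A small stand-in for that block, compiled the same way (dimensions and the GELU activation are assumptions for illustration):

# Assumed sketch of the decoder feed-forward site at modeling_blenderbot_small.py:430.
import torch
import torch.nn as nn

class DecoderFFNSketch(nn.Module):
    def __init__(self, d_model=512, d_ffn=2048):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ffn)
        self.fc2 = nn.Linear(d_ffn, d_model)
        self.activation_fn = nn.GELU()   # assumed GELU-family activation

    def forward(self, hidden_states):
        residual = hidden_states
        hidden_states = self.activation_fn(self.fc1(hidden_states))  # cf. line 430 in the traces
        hidden_states = self.fc2(hidden_states)
        return residual + hidden_states

ffn = torch.compile(DecoderFFNSketch())
y = ffn(torch.randn(2, 64, 512))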
Found from : 2025-08-14T21:42:09.2287609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:09.2287953Z return mod(**inputs) 2025-08-14T21:42:09.2288409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:42:09.2288880Z outputs = self.model.decoder( 2025-08-14T21:42:09.2289350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:09.2289849Z layer_outputs = decoder_layer( 2025-08-14T21:42:09.2290244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:09.2290641Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:09.2291118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:09.2291645Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:09.2292159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:09.2292657Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:09.2293116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:09.2293613Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:09.2293801Z 2025-08-14T21:42:09.2293907Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:42:09.2294291Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:09.2294640Z return mod(**inputs) 2025-08-14T21:42:09.2295085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:42:09.2295553Z outputs = self.model.decoder( 2025-08-14T21:42:09.2296013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:09.2296491Z layer_outputs = decoder_layer( 2025-08-14T21:42:09.2296853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:09.2297255Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:09.2297736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:09.2298240Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:09.2298737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:09.2299243Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:09.2299912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:09.2300417Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:09.2300597Z 2025-08-14T21:42:09.2300687Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2300921Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2301169Z 
cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:42:09.2301537Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:09.2301888Z return mod(**inputs) 2025-08-14T21:42:09.2302336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-08-14T21:42:09.2302806Z outputs = self.model.decoder( 2025-08-14T21:42:09.2303265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:09.2303721Z layer_outputs = decoder_layer( 2025-08-14T21:42:09.2304125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:09.2304510Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:09.2305064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:42:09.2305560Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:09.2306003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:09.2306360Z return self.act(input) 2025-08-14T21:42:09.2306488Z 2025-08-14T21:42:09.2306572Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2306846Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2307071Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2307284Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2307504Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2307723Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2307934Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2308159Z cudagraph partition due to non gpu ops 2025-08-14T21:42:09.2308413Z cudagraph partition due to non gpu ops. 
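The "Compilation time (from dynamo_timed)" and TIMING lines earlier in this shard come from the benchmark harness's own instrumentation. For a rough, comparable number outside the harness, one can time the first call to a torch.compile'd module (which pays the Dynamo + Inductor compile cost) against later calls; this is only a sketch, not the dynamo_timed machinery:

# Rough timing sketch (not the harness's dynamo_timed instrumentation): the first
# call to a torch.compile'd module includes compilation, later calls do not.
import time
import torch
import torch.nn as nn

model = torch.compile(nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256)))
x = torch.randn(32, 256)

t0 = time.perf_counter()
model(x)                                  # triggers compilation plus one run
compile_plus_first_run = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(10):
    model(x)                              # steady-state, compiled execution
steady_state = (time.perf_counter() - t0) / 10

print(f"first call (incl. compile): {compile_plus_first_run:.3f}s, "
      f"steady state: {steady_state * 1e3:.2f}ms")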
cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:09.2421073Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:09.2421470Z     return mod(**inputs)
2025-08-14T21:42:09.2421895Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward
2025-08-14T21:42:09.2422365Z     outputs = self.model.decoder(
2025-08-14T21:42:09.2422826Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
2025-08-14T21:42:09.2423283Z     layer_outputs = decoder_layer(
2025-08-14T21:42:09.2423653Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:09.2424037Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:09.2424493Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
2025-08-14T21:42:09.2425004Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:42:09.2425421Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:42:09.2425782Z     return self.act(input)
2025-08-14T21:42:09.2425898Z 
2025-08-14T21:42:09.2425981Z cudagraph partition due to non gpu ops
2025-08-14T21:42:09.2426231Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:09.2426611Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:09.2426956Z     return mod(**inputs)
2025-08-14T21:42:09.2427388Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1528, in forward
2025-08-14T21:42:09.2427863Z     logits = self.lm_head(outputs[0])
2025-08-14T21:42:09.2428003Z 
2025-08-14T21:42:09.2428119Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:09.2428492Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:09.2428840Z     return mod(**inputs)
2025-08-14T21:42:09.2429283Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1534, in forward
2025-08-14T21:42:09.2429825Z     loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:42:09.2430035Z 
2025-08-14T21:42:17.8599507Z Compilation time (from dynamo_timed): 14.535297015
2025-08-14T21:42:17.8622807Z pass
2025-08-14T21:42:17.8623406Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:17.8624880Z TIMING: _recursive_pre_grad_passes:0.02797 _recursive_joint_graph_passes:0.32027 _recursive_post_grad_passes:0.05875 async_compile.wait:0.83497 code_gen:8.40974 inductor_compile:9.78448 backend_compile:12.86401 gc:0.00088 entire_frame_compile:14.5353 total_wall_time:14.5353
2025-08-14T21:42:17.8626561Z STATS: call_* op count: 252 | FakeTensorMode.__torch_dispatch__:16977 | FakeTensor.__torch_dispatch__:2714 | ProxyTorchDispatchMode.__torch_dispatch__:3847
2025-08-14T21:42:17.8627455Z Dynamo produced 1 graphs covering 252 ops with 0 graph breaks (0 unique)
2025-08-14T21:42:23.5816230Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:42:23.5817184Z   from pkg_resources import resource_filename
2025-08-14T21:42:24.1836997Z 
2025-08-14T21:42:25.3355051Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:42:25.3361338Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:42:25.3375900Z cpu eval BlenderbotSmallForConditionalGeneration
2025-08-14T21:42:25.6124422Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:25.7189779Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:25.8283744Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
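The dynamo_timed TIMING line above is a flat sequence of phase:seconds pairs (inductor_compile, backend_compile, entire_frame_compile, and so on). Below is a small, self-contained sketch for turning that line into a dictionary so compile-time breakdowns can be compared across runs; it is plain string handling and does not rely on any PyTorch API.

# Parse the "TIMING: ..." line reported above into {phase: seconds}.
timing_line = (
    "TIMING: _recursive_pre_grad_passes:0.02797 _recursive_joint_graph_passes:0.32027 "
    "_recursive_post_grad_passes:0.05875 async_compile.wait:0.83497 code_gen:8.40974 "
    "inductor_compile:9.78448 backend_compile:12.86401 gc:0.00088 "
    "entire_frame_compile:14.5353 total_wall_time:14.5353"
)

timings = {
    name: float(value)
    for name, value in (
        field.rsplit(":", 1) for field in timing_line.removeprefix("TIMING: ").split()
    )
}
print(timings["inductor_compile"], timings["total_wall_time"])  # 9.78448 14.5353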
2025-08-14T21:42:40.9498517Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9498834Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9499050Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9499316Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9499750Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9499970Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9500185Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9500418Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9500650Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9500860Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9501070Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9501280Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9501488Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9501694Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9501892Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9502139Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:40.9502539Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:40.9502871Z     return mod(**inputs)
2025-08-14T21:42:40.9503336Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:40.9503815Z     outputs = self.model(
2025-08-14T21:42:40.9504254Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward
2025-08-14T21:42:40.9504698Z     encoder_outputs = self.encoder(
2025-08-14T21:42:40.9505188Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward
2025-08-14T21:42:40.9505655Z     layer_outputs = encoder_layer(
2025-08-14T21:42:40.9506008Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:40.9506390Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:40.9506842Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward
2025-08-14T21:42:40.9507308Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:42:40.9507770Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:42:40.9508240Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:42:40.9508692Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:42:40.9509184Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:42:40.9509373Z 
2025-08-14T21:42:40.9509483Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:40.9509853Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:40.9510186Z     return mod(**inputs)
2025-08-14T21:42:40.9510612Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:40.9511047Z     outputs = self.model(
2025-08-14T21:42:40.9511876Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward
2025-08-14T21:42:40.9512334Z     encoder_outputs = self.encoder(
2025-08-14T21:42:40.9512780Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward
2025-08-14T21:42:40.9513340Z     layer_outputs = encoder_layer(
2025-08-14T21:42:40.9513700Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:40.9514070Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:40.9514505Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward
2025-08-14T21:42:40.9514964Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:42:40.9515422Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
2025-08-14T21:42:40.9515884Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:42:40.9516318Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:42:40.9516782Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:42:40.9516946Z 
2025-08-14T21:42:40.9517037Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9517247Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9517497Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:42:40.9517864Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:40.9518193Z     return mod(**inputs)
2025-08-14T21:42:40.9518605Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
2025-08-14T21:42:40.9519046Z     outputs = self.model(
2025-08-14T21:42:40.9519460Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward
2025-08-14T21:42:40.9519888Z     encoder_outputs = self.encoder(
2025-08-14T21:42:40.9520327Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward
2025-08-14T21:42:40.9520763Z     layer_outputs = encoder_layer(
2025-08-14T21:42:40.9521124Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:42:40.9521477Z     return super().__call__(*args, **kwargs)
2025-08-14T21:42:40.9521915Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 307, in forward
2025-08-14T21:42:40.9522402Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:42:40.9522810Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:42:40.9523137Z     return self.act(input)
2025-08-14T21:42:40.9523253Z 
2025-08-14T21:42:40.9523332Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9523546Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9523746Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9523949Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9524150Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9524345Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9524551Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9524753Z cudagraph partition due to non gpu ops
2025-08-14T21:42:40.9524985Z cudagraph partition due to non gpu ops.
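The same three encoder-side traces, and long runs of bare "cudagraph partition due to non gpu ops" lines, recur many more times in the raw output of this shard. Below is a sketch for summarizing them offline; it assumes the raw job log has been downloaded to a local file, and the filename job.log is a placeholder rather than anything this workflow produces.

# Summarize the repeated cudagraph-partition messages in a downloaded copy of this
# job's raw log. "job.log" is a hypothetical local filename.
from collections import Counter

counts = Counter()
with open("job.log", encoding="utf-8") as fh:
    for line in fh:
        if "cudagraph partition due to non gpu ops" in line:
            counts["cudagraph partition messages"] += 1
        if "Found from :" in line:
            counts["partition messages with a traceback"] += 1

for label, n in counts.most_common():
    print(f"{n:6d}  {label}")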
Found from : 2025-08-14T21:42:40.9675017Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9675350Z return mod(**inputs) 2025-08-14T21:42:40.9675761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9676192Z outputs = self.model( 2025-08-14T21:42:40.9676614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward 2025-08-14T21:42:40.9677057Z encoder_outputs = self.encoder( 2025-08-14T21:42:40.9677483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward 2025-08-14T21:42:40.9677966Z layer_outputs = encoder_layer( 2025-08-14T21:42:40.9678316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9678675Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9679137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward 2025-08-14T21:42:40.9679589Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:42:40.9680035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9680482Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9680921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:40.9681369Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:40.9681530Z 2025-08-14T21:42:40.9681624Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9681833Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9682071Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9682441Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9682784Z return mod(**inputs) 2025-08-14T21:42:40.9683205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9683652Z outputs = self.model( 2025-08-14T21:42:40.9684082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward 2025-08-14T21:42:40.9684528Z encoder_outputs = self.encoder( 2025-08-14T21:42:40.9684975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward 2025-08-14T21:42:40.9685426Z layer_outputs = encoder_layer( 2025-08-14T21:42:40.9685786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9686156Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9686605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 307, in forward 2025-08-14T21:42:40.9687096Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:40.9687493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:40.9687832Z return self.act(input) 2025-08-14T21:42:40.9687953Z 2025-08-14T21:42:40.9688033Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9688254Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9688458Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9688675Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9688888Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9689095Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9689305Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9689519Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9689752Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:42:40.9884094Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9884415Z return mod(**inputs) 2025-08-14T21:42:40.9884835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9885269Z outputs = self.model( 2025-08-14T21:42:40.9885688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9886120Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9886549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9886995Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9887343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9887708Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9888148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:40.9888629Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:40.9889089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9889551Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9889992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:40.9890481Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:40.9890665Z 2025-08-14T21:42:40.9890769Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9891135Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9891469Z return mod(**inputs) 2025-08-14T21:42:40.9891879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9892316Z outputs = self.model( 2025-08-14T21:42:40.9892730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9893171Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9893598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9894033Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9894383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9894743Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9895224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:40.9895707Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:40.9896214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9896666Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9897101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:40.9897555Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:40.9897717Z 2025-08-14T21:42:40.9897806Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9898016Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9898227Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9898442Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9898643Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9898851Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9899059Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9899277Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9899590Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9899984Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9900329Z return mod(**inputs) 2025-08-14T21:42:40.9900770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9901212Z outputs = self.model( 2025-08-14T21:42:40.9901636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9902083Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9902515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9902955Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9903312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9903666Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9904103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:40.9904583Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:40.9905044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9905483Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9905919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:40.9906383Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:40.9906564Z 2025-08-14T21:42:40.9906673Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9907019Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9907339Z return mod(**inputs) 2025-08-14T21:42:40.9907745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9908171Z outputs = self.model( 2025-08-14T21:42:40.9908572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9909000Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9909473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9909895Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9910290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9910643Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9911074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:40.9911524Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:40.9911992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9912444Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9912877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:40.9913311Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:40.9913477Z 2025-08-14T21:42:40.9913559Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9913776Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9913998Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9914355Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9914673Z return mod(**inputs) 2025-08-14T21:42:40.9915080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9915496Z outputs = self.model( 2025-08-14T21:42:40.9915908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9916337Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9916753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9917181Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9917523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9917874Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9918294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:42:40.9918767Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:40.9919147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:40.9919480Z return self.act(input) 2025-08-14T21:42:40.9919588Z 2025-08-14T21:42:40.9919665Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9919872Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9920073Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9920267Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9920470Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9920670Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9920863Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9921066Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9921295Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9921650Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9921965Z return mod(**inputs) 2025-08-14T21:42:40.9922381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9922854Z outputs = self.model( 2025-08-14T21:42:40.9923259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9923691Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9924155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9924585Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9924927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9925284Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9925722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:40.9926171Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:40.9926621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9927074Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9927510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:40.9927968Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:40.9928156Z 2025-08-14T21:42:40.9928257Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9928613Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9928935Z return mod(**inputs) 2025-08-14T21:42:40.9929350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9929779Z outputs = self.model( 2025-08-14T21:42:40.9930196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9930628Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9931053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9931479Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9931822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9932170Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9932601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:40.9933068Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:40.9933523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9933967Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9934397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:40.9934841Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:40.9935003Z 2025-08-14T21:42:40.9935093Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9935304Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9935515Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9935726Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9935925Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9936133Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9936338Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9936538Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9936813Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9937186Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9937510Z return mod(**inputs) 2025-08-14T21:42:40.9937957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9938395Z outputs = self.model( 2025-08-14T21:42:40.9938815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9939298Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9939834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9940306Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9940682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9941064Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9941493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:40.9942130Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:40.9942593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9943037Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9943468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:40.9943937Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:40.9944118Z 2025-08-14T21:42:40.9944233Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9944579Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9944898Z return mod(**inputs) 2025-08-14T21:42:40.9945309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9945763Z outputs = self.model( 2025-08-14T21:42:40.9946170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9946601Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9947029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9947447Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9947791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9948144Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9948576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:40.9949031Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:40.9949489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9949934Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9950357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:40.9950798Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:40.9950959Z 2025-08-14T21:42:40.9951039Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9951338Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9951568Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9951922Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9952294Z return mod(**inputs) 2025-08-14T21:42:40.9952695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9953121Z outputs = self.model( 2025-08-14T21:42:40.9953530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9953959Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9954375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9954804Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9955142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9955490Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9955910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:42:40.9956034Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:40.9956245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:40.9956312Z return self.act(input) 2025-08-14T21:42:40.9956315Z 2025-08-14T21:42:40.9956401Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9956477Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9956556Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9956628Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9956704Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9956782Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9956853Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9956924Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9957036Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9957231Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9957295Z return mod(**inputs) 2025-08-14T21:42:40.9957597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9957672Z outputs = self.model( 2025-08-14T21:42:40.9957968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9958038Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9958326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9958403Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9958609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9958696Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9958981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:40.9959077Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:40.9959365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9959459Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9959779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:40.9959911Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:40.9959915Z 2025-08-14T21:42:40.9960085Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9960280Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9960345Z return mod(**inputs) 2025-08-14T21:42:40.9960633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9960707Z outputs = self.model( 2025-08-14T21:42:40.9960999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9961077Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9961367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9961437Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9961652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9961732Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9962025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-08-14T21:42:40.9962130Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:42:40.9962422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9962523Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9962807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:40.9962911Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:40.9962923Z 2025-08-14T21:42:40.9963000Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963081Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963164Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963241Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963313Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963394Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963466Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963538Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9963648Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9963841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9963904Z return mod(**inputs) 2025-08-14T21:42:40.9964210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9964276Z outputs = self.model( 2025-08-14T21:42:40.9964578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9964654Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9964947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9965025Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9965239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9965323Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9966512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:40.9966628Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:40.9966929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9967061Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9967350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:42:40.9967476Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:42:40.9967479Z 2025-08-14T21:42:40.9967578Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9967779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9967844Z return mod(**inputs) 2025-08-14T21:42:40.9968150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9968224Z outputs = self.model( 2025-08-14T21:42:40.9968510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9968590Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9968877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9968945Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9969162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9969239Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9969533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-08-14T21:42:40.9969636Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:42:40.9969922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-08-14T21:42:40.9970024Z attn_output, attn_weights = attention_interface( 2025-08-14T21:42:40.9970291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:42:40.9970392Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:42:40.9970403Z 2025-08-14T21:42:40.9970481Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9970555Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9970657Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:42:40.9970841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9970905Z return mod(**inputs) 2025-08-14T21:42:40.9971200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-08-14T21:42:40.9971264Z outputs = self.model( 2025-08-14T21:42:40.9971558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-08-14T21:42:40.9971627Z decoder_outputs = self.decoder( 2025-08-14T21:42:40.9971911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-08-14T21:42:40.9971984Z layer_outputs = decoder_layer( 2025-08-14T21:42:40.9972189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:42:40.9972264Z return super().__call__(*args, **kwargs) 2025-08-14T21:42:40.9972585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 430, in forward 2025-08-14T21:42:40.9972700Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:42:40.9972908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:42:40.9973006Z return self.act(input) 2025-08-14T21:42:40.9973009Z 2025-08-14T21:42:40.9973083Z cudagraph partition due to non gpu ops 2025-08-14T21:42:40.9973189Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:42:40.9973374Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:42:40.9973436Z return mod(**inputs) 2025-08-14T21:42:40.9973732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1393, in forward 2025-08-14T21:42:40.9973849Z lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias 2025-08-14T21:42:40.9973852Z 2025-08-14T21:42:40.9973957Z cudagraph partition due to non gpu ops. 
Found from : 
2025-08-14T21:42:40.9974145Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:42:40.9974211Z     return mod(**inputs)
2025-08-14T21:42:40.9974512Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1398, in forward
2025-08-14T21:42:40.9974669Z     masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:42:40.9974673Z 
2025-08-14T21:42:51.7821036Z Compilation time (from dynamo_timed): 24.737058552
2025-08-14T21:42:51.7836302Z pass
2025-08-14T21:42:51.7836695Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:42:51.7837551Z TIMING: _recursive_pre_grad_passes:0.06195 _recursive_joint_graph_passes:0.57849 _recursive_post_grad_passes:0.1182 async_compile.wait:0.78064 code_gen:10.33537 inductor_compile:12.86116 backend_compile:20.78158 gc:0.00023 entire_frame_compile:24.73706 total_wall_time:24.73706
2025-08-14T21:42:51.7838785Z STATS: call_* op count: 652 | FakeTensorMode.__torch_dispatch__:42623 | FakeTensor.__torch_dispatch__:6580 | ProxyTorchDispatchMode.__torch_dispatch__:9376
2025-08-14T21:42:51.7839400Z Dynamo produced 1 graphs covering 652 ops with 0 graph breaks (0 unique)
2025-08-14T21:42:57.7157645Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:42:57.7158671Z   from pkg_resources import resource_filename
2025-08-14T21:42:58.8121542Z 
2025-08-14T21:43:00.3155304Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:43:00.3159440Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:43:00.3165962Z cpu eval CamemBert
2025-08-14T21:43:00.8306817Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:01.0939224Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:01.3382108Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:11.1861455Z cudagraph partition due to non gpu ops. 
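The per-model summary above reports compile-time phases as space-separated key:value pairs on the TIMING: line and dispatch counts as |-separated pairs on the STATS: line. When comparing shards or runs it can help to pull those into dictionaries; a small sketch is below, using the exact lines from this log as sample input. The helper functions are an assumption for illustration, not part of the benchmark tooling.

```python
# Sketch of a log helper (not benchmark tooling): parse the "TIMING:" and
# "STATS:" summary lines shown above into dictionaries.
def parse_timing(line: str) -> dict[str, float]:
    # "TIMING: name:seconds name:seconds ..." -> {name: seconds}
    body = line.split("TIMING:", 1)[1]
    return {k: float(v) for k, v in (tok.rsplit(":", 1) for tok in body.split())}


def parse_stats(line: str) -> dict[str, int]:
    # "STATS: call_* op count: N | name:N | ..." -> {field: count}
    out = {}
    for field in line.split("STATS:", 1)[1].split("|"):
        name, _, value = field.strip().rpartition(":")
        out[name.strip()] = int(value)
    return out


timing_line = ("TIMING: _recursive_pre_grad_passes:0.06195 _recursive_joint_graph_passes:0.57849 "
               "_recursive_post_grad_passes:0.1182 async_compile.wait:0.78064 code_gen:10.33537 "
               "inductor_compile:12.86116 backend_compile:20.78158 gc:0.00023 "
               "entire_frame_compile:24.73706 total_wall_time:24.73706")
stats_line = ("STATS: call_* op count: 652 | FakeTensorMode.__torch_dispatch__:42623 | "
              "FakeTensor.__torch_dispatch__:6580 | ProxyTorchDispatchMode.__torch_dispatch__:9376")

print(parse_timing(timing_line)["inductor_compile"])   # 12.86116
print(parse_stats(stats_line)["call_* op count"])      # 652
```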
Found from : 2025-08-14T21:43:11.1867355Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1872200Z return mod(**inputs) 2025-08-14T21:43:11.1877482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1879437Z outputs = self.roberta( 2025-08-14T21:43:11.1880533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-08-14T21:43:11.1885532Z embedding_output = self.embeddings( 2025-08-14T21:43:11.1887870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-08-14T21:43:11.1888957Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:43:11.1894862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1590, in create_position_ids_from_input_ids 2025-08-14T21:43:11.1896785Z mask = input_ids.ne(padding_idx).int() 2025-08-14T21:43:11.1897148Z 2025-08-14T21:43:11.1897374Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1897620Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1897968Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1898285Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1898615Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1898954Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1899873Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1900190Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1900457Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1900695Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1900923Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1901151Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1901424Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:43:11.1901842Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1902232Z return mod(**inputs) 2025-08-14T21:43:11.1902714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1903198Z outputs = self.roberta( 2025-08-14T21:43:11.1903641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-08-14T21:43:11.1904111Z embedding_output = self.embeddings( 2025-08-14T21:43:11.1904560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-08-14T21:43:11.1905129Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:43:11.1905921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1591, in create_position_ids_from_input_ids 2025-08-14T21:43:11.1906560Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:43:11.1906820Z 2025-08-14T21:43:11.1906953Z cudagraph partition due to non gpu ops. 
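The CamemBert stacks above bottom out in create_position_ids_from_input_ids, at the two quoted lines that derive position ids from input_ids with ne/cumsum integer arithmetic. Below is a standalone re-creation of just those two lines on a toy example; the input_ids, padding_idx=1, and past_key_values_length=0 values are assumptions, and the final "+ padding_idx" line comes from the same transformers helper (it is not shown in the stack) and is included only for completeness.

```python
# Standalone re-creation of the two lines quoted in the stack above
# (transformers' create_position_ids_from_input_ids). Toy inputs are assumed.
import torch

padding_idx = 1                 # assumption for the example
past_key_values_length = 0      # assumption for the example
input_ids = torch.tensor([[5, 7, 9, padding_idx, padding_idx]])

mask = input_ids.ne(padding_idx).int()  # modeling_camembert.py:1590
incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask  # :1591
position_ids = incremental_indices.long() + padding_idx  # return value of the helper (not in the stack)

print(position_ids)  # tensor([[2, 3, 4, 1, 1]]) -- padded positions stay at padding_idx
```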
Found from : 2025-08-14T21:43:11.1907344Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1907695Z return mod(**inputs) 2025-08-14T21:43:11.1908124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1908571Z outputs = self.roberta( 2025-08-14T21:43:11.1908982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-08-14T21:43:11.1909422Z embedding_output = self.embeddings( 2025-08-14T21:43:11.1909865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-08-14T21:43:11.1910430Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:43:11.1911265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1591, in create_position_ids_from_input_ids 2025-08-14T21:43:11.1911896Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:43:11.1912161Z 2025-08-14T21:43:11.1912252Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1912545Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1912761Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1912982Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1913210Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1913421Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1913641Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1913893Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:43:11.1914271Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1914636Z return mod(**inputs) 2025-08-14T21:43:11.1915058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1915511Z outputs = self.roberta( 2025-08-14T21:43:11.1915921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:43:11.1916365Z encoder_outputs = self.encoder( 2025-08-14T21:43:11.1916806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:43:11.1917233Z layer_outputs = layer_module( 2025-08-14T21:43:11.1917618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:43:11.1918013Z return super().__call__(*args, **kwargs) 2025-08-14T21:43:11.1918451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:43:11.1918880Z self_attention_outputs = self.attention( 2025-08-14T21:43:11.1919275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:43:11.1919654Z return func(*args, **kwargs) 2025-08-14T21:43:11.1920060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:43:11.1920463Z self_outputs = self.self( 2025-08-14T21:43:11.1920851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:43:11.1921260Z return func(*args, **kwargs) 2025-08-14T21:43:11.1921670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:43:11.1922170Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:43:11.1922372Z 2025-08-14T21:43:11.1922459Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1922691Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1922940Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:43:11.1923326Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1923678Z return mod(**inputs) 2025-08-14T21:43:11.1924080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1924515Z outputs = self.roberta( 2025-08-14T21:43:11.1924924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:43:11.1925360Z encoder_outputs = self.encoder( 2025-08-14T21:43:11.1925772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:43:11.1926182Z layer_outputs = layer_module( 2025-08-14T21:43:11.1926681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:43:11.1927079Z return super().__call__(*args, **kwargs) 2025-08-14T21:43:11.1927526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:43:11.1927993Z layer_output = apply_chunking_to_forward( 2025-08-14T21:43:11.1928413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:43:11.1928816Z return forward_fn(*input_tensors) 2025-08-14T21:43:11.1929261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:43:11.1929761Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:43:11.1930226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:43:11.1930668Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:43:11.1931060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:43:11.1931416Z return self.act(input) 2025-08-14T21:43:11.1931529Z 2025-08-14T21:43:11.1931615Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1931832Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1932042Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1932250Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1932454Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1932661Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1932872Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1933083Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1933337Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:43:11.1933733Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1934080Z return mod(**inputs) 2025-08-14T21:43:11.1934498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1934942Z outputs = self.roberta( 2025-08-14T21:43:11.1935356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:43:11.1935786Z encoder_outputs = self.encoder( 2025-08-14T21:43:11.1936217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:43:11.1936651Z layer_outputs = layer_module( 2025-08-14T21:43:11.1937018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:43:11.1937422Z return super().__call__(*args, **kwargs) 2025-08-14T21:43:11.1937860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:43:11.1938306Z self_attention_outputs = self.attention( 2025-08-14T21:43:11.1938712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:43:11.1939117Z return func(*args, **kwargs) 2025-08-14T21:43:11.1939640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:43:11.1940088Z self_outputs = self.self( 2025-08-14T21:43:11.1940473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:43:11.1940878Z return func(*args, **kwargs) 2025-08-14T21:43:11.1941358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:43:11.1942238Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:43:11.1942448Z 2025-08-14T21:43:11.1942538Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1942922Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1943182Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:43:11.1943572Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1943927Z return mod(**inputs) 2025-08-14T21:43:11.1944345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1944836Z outputs = self.roberta( 2025-08-14T21:43:11.1945250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:43:11.1945693Z encoder_outputs = self.encoder( 2025-08-14T21:43:11.1946122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:43:11.1946569Z layer_outputs = layer_module( 2025-08-14T21:43:11.1946933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:43:11.1947320Z return super().__call__(*args, **kwargs) 2025-08-14T21:43:11.1947757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:43:11.1948195Z layer_output = apply_chunking_to_forward( 2025-08-14T21:43:11.1948629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:43:11.1949055Z return forward_fn(*input_tensors) 2025-08-14T21:43:11.1949512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:43:11.1950002Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:43:11.1950457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:43:11.1950909Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:43:11.1951293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:43:11.1951634Z return self.act(input) 2025-08-14T21:43:11.1951754Z 2025-08-14T21:43:11.1951834Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1952050Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1952254Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1952460Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1952668Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1952869Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1953083Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1953366Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1964788Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:43:11.1965329Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1965688Z return mod(**inputs) 2025-08-14T21:43:11.1966110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1966541Z outputs = self.roberta( 2025-08-14T21:43:11.1966950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:43:11.1967376Z encoder_outputs = self.encoder( 2025-08-14T21:43:11.1967786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:43:11.1968361Z layer_outputs = layer_module( 2025-08-14T21:43:11.1968739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:43:11.1969107Z return super().__call__(*args, **kwargs) 2025-08-14T21:43:11.1969618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-08-14T21:43:11.1970045Z self_attention_outputs = self.attention( 2025-08-14T21:43:11.1970442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:43:11.1970816Z return func(*args, **kwargs) 2025-08-14T21:43:11.1971221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-08-14T21:43:11.1971639Z self_outputs = self.self( 2025-08-14T21:43:11.1972019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:43:11.1972422Z return func(*args, **kwargs) 2025-08-14T21:43:11.1972853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-08-14T21:43:11.1973371Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:43:11.1973573Z 2025-08-14T21:43:11.1973663Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1973903Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1974163Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:43:11.1974559Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:43:11.1974919Z return mod(**inputs) 2025-08-14T21:43:11.1975343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-08-14T21:43:11.1975793Z outputs = self.roberta( 2025-08-14T21:43:11.1976209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-08-14T21:43:11.1976650Z encoder_outputs = self.encoder( 2025-08-14T21:43:11.1977088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-08-14T21:43:11.1977533Z layer_outputs = layer_module( 2025-08-14T21:43:11.1977907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:43:11.1978300Z return super().__call__(*args, **kwargs) 2025-08-14T21:43:11.1978735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 570, in forward 2025-08-14T21:43:11.1979173Z layer_output = apply_chunking_to_forward( 2025-08-14T21:43:11.1979724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:43:11.1987374Z return forward_fn(*input_tensors) 2025-08-14T21:43:11.1987962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 578, in feed_forward_chunk 2025-08-14T21:43:11.1988577Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:43:11.1989144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 494, in forward 2025-08-14T21:43:11.1989666Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:43:11.1990105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:43:11.1990449Z return self.act(input) 2025-08-14T21:43:11.1990576Z 2025-08-14T21:43:11.1990674Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1990890Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1991272Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1991492Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1991703Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1991904Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1992182Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1992395Z cudagraph partition due to non gpu ops 2025-08-14T21:43:11.1992638Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:43:11.2160134Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:43:11.2160471Z return mod(**inputs)
2025-08-14T21:43:11.2160859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1059, in forward
2025-08-14T21:43:11.2161374Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:43:11.2161620Z
2025-08-14T21:43:20.3407116Z Compilation time (from dynamo_timed): 17.550998653
2025-08-14T21:43:20.3481879Z pass
2025-08-14T21:43:20.3484518Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:20.3485361Z TIMING: _recursive_pre_grad_passes:0.03442 _recursive_joint_graph_passes:0.39296 _recursive_post_grad_passes:0.08204 async_compile.wait:0.86728 code_gen:8.86812 inductor_compile:10.5685 backend_compile:14.85038 gc:0.00032 entire_frame_compile:17.551 total_wall_time:17.551
2025-08-14T21:43:20.3487574Z STATS: call_* op count: 297 | FakeTensorMode.__torch_dispatch__:24279 | FakeTensor.__torch_dispatch__:3917 | ProxyTorchDispatchMode.__torch_dispatch__:5350
2025-08-14T21:43:20.3488064Z Dynamo produced 1 graphs covering 297 ops with 0 graph breaks (0 unique)
2025-08-14T21:43:26.0043540Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:43:26.0045070Z from pkg_resources import resource_filename
2025-08-14T21:43:26.5939041Z
2025-08-14T21:43:35.8302843Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:43:35.8303311Z loading model: 0it [00:09, ?it/s]
2025-08-14T21:43:35.8336256Z cpu eval DebertaV2ForMaskedLM
2025-08-14T21:43:35.9659352Z Compilation time (from dynamo_timed): 0
2025-08-14T21:43:35.9662119Z pass_due_to_skip
2025-08-14T21:43:35.9662539Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:35.9662953Z TIMING: total_wall_time:0
2025-08-14T21:43:35.9663157Z STATS: call_* op count: 0
2025-08-14T21:43:35.9663504Z Dynamo produced 0 graphs covering 0 ops with 0 graph breaks (0 unique)
2025-08-14T21:43:40.9660299Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
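The "Compilation time (from dynamo_timed)" and "Dynamo produced 1 graphs ... 0 graph breaks" lines summarize the torch.compile step for the preceding model. A hedged sketch of how first-call compile time is usually separated from steady-state time, assuming a hypothetical stand-in module rather than the harness's dynamo_timed instrumentation:

    import time
    import torch
    import torch.nn as nn

    mod = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64)).eval()  # hypothetical stand-in model
    compiled = torch.compile(mod)  # default backend is inductor, the backend timed above
    x = torch.randn(8, 64)

    with torch.no_grad():
        t0 = time.perf_counter()
        compiled(x)   # first call triggers Dynamo tracing + Inductor codegen
        t1 = time.perf_counter()
        compiled(x)   # later calls reuse the compiled graph
        t2 = time.perf_counter()
    print(f"compile+run: {t1 - t0:.3f}s, steady-state: {t2 - t1:.6f}s")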
2025-08-14T21:43:40.9661510Z from pkg_resources import resource_filename
2025-08-14T21:43:41.5924587Z
2025-08-14T21:43:49.1566354Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:43:49.1569334Z loading model: 0it [00:07, ?it/s]
2025-08-14T21:43:49.1594452Z cpu eval DebertaV2ForQuestionAnswering
2025-08-14T21:43:52.5105720Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:54.0494719Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:43:55.5498607Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:15.3191140Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.3191633Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.3192053Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:44:15.3192774Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.3193409Z return mod(**inputs)
2025-08-14T21:44:15.3194196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.3195026Z outputs = self.deberta(
2025-08-14T21:44:15.3195764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.3196533Z encoder_outputs = self.encoder(
2025-08-14T21:44:15.3197287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.3198085Z output_states, attn_weights = layer_module(
2025-08-14T21:44:15.3198765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.3199905Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.3200687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:15.3201470Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:15.3202440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:15.3203216Z self_output, att_matrix = self.self(
2025-08-14T21:44:15.3203970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward
2025-08-14T21:44:15.3204928Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads)
2025-08-14T21:44:15.3205944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-08-14T21:44:15.3206878Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-08-14T21:44:15.3207331Z
2025-08-14T21:44:15.3207524Z cudagraph partition due to non gpu ops.
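The DebertaV2ForQuestionAnswering traces bottom out in transpose_for_scores, which folds the head dimension into the batch dimension so attention can run as plain batched matmuls. A sketch of that permute/contiguous/view pattern, assuming illustrative sizes rather than the benchmark's:

    import torch

    batch, seq_len, num_heads, head_dim = 2, 128, 12, 64
    x = torch.randn(batch, seq_len, num_heads * head_dim)  # output of a query/key/value projection

    # Split heads, move them next to the batch dim, then fold (batch, heads) into one
    # leading dim - roughly the shape flow of transpose_for_scores in the trace above.
    x = x.view(batch, seq_len, num_heads, head_dim)   # (B, S, H, D)
    x = x.permute(0, 2, 1, 3).contiguous()            # (B, H, S, D)
    x = x.view(-1, x.size(-2), x.size(-1))            # (B*H, S, D), ready for torch.bmm
    print(x.shape)  # torch.Size([24, 128, 64])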
Found from :
2025-08-14T21:44:15.3208187Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.3208782Z return mod(**inputs)
2025-08-14T21:44:15.3209484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.3210239Z outputs = self.deberta(
2025-08-14T21:44:15.3210942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.3211676Z encoder_outputs = self.encoder(
2025-08-14T21:44:15.3212391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.3213172Z output_states, attn_weights = layer_module(
2025-08-14T21:44:15.3213861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.3214526Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.3215275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:15.3216054Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:15.3216838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:15.3217593Z self_output, att_matrix = self.self(
2025-08-14T21:44:15.3218325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-08-14T21:44:15.3219380Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-08-14T21:44:15.3220036Z
2025-08-14T21:44:15.3220220Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:15.3220877Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.3221482Z return mod(**inputs)
2025-08-14T21:44:15.3222209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.3222981Z outputs = self.deberta(
2025-08-14T21:44:15.3223658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.3224379Z encoder_outputs = self.encoder(
2025-08-14T21:44:15.3225128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.3225843Z output_states, attn_weights = layer_module(
2025-08-14T21:44:15.3226617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.3227290Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.3228038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:15.3228897Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:15.3229701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:15.3230463Z self_output, att_matrix = self.self(
2025-08-14T21:44:15.3231204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-08-14T21:44:15.3232181Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-08-14T21:44:15.3232680Z
2025-08-14T21:44:15.3232827Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.3233255Z cudagraph partition due to non gpu ops.
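Line 248 in the two traces above computes raw attention scores as a batched matmul of the folded query matrix against the scaled, transposed key matrix. A sketch of the same shape algebra, assuming a plain sqrt(head_dim) scale rather than DeBERTa-v2's exact scale factor and illustrative sizes:

    import torch

    bh, seq_len, head_dim = 24, 128, 64  # (batch*heads, seq, head_dim) - illustrative
    query_layer = torch.randn(bh, seq_len, head_dim)
    key_layer = torch.randn(bh, seq_len, head_dim)

    # Simplified scale; the traced code divides the transposed keys by its own scale tensor.
    scale = torch.sqrt(torch.tensor(head_dim, dtype=torch.float32))
    # (B*H, S, D) @ (B*H, D, S) -> (B*H, S, S)
    attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
    print(attention_scores.shape)  # torch.Size([24, 128, 128])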
Found from :
2025-08-14T21:44:15.3233904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.3234500Z return mod(**inputs)
2025-08-14T21:44:15.3235209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.3235966Z outputs = self.deberta(
2025-08-14T21:44:15.3236671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.3237420Z encoder_outputs = self.encoder(
2025-08-14T21:44:15.3238156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.3238941Z output_states, attn_weights = layer_module(
2025-08-14T21:44:15.3239611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.3240266Z return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.3241021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:15.3241964Z attention_output, att_matrix = self.attention(
2025-08-14T21:44:15.3242783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:15.3243538Z self_output, att_matrix = self.self(
2025-08-14T21:44:15.3244277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward
2025-08-14T21:44:15.3245225Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)
2025-08-14T21:44:15.3246276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-08-14T21:44:15.3247220Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-08-14T21:44:15.3247522Z
2025-08-14T21:44:15.3247720Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:44:15.3248359Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.3248931Z return mod(**inputs) 2025-08-14T21:44:15.3249622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.3250367Z outputs = self.deberta( 2025-08-14T21:44:15.3251063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.3251828Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.3252767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.3253523Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.3254194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.3254953Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.3255688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.3256456Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.3257244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.3257981Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.3258738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:15.3259568Z context_layer = torch.bmm( 2025-08-14T21:44:15.3259801Z 2025-08-14T21:44:15.3259986Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.3260615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.3261213Z return mod(**inputs) 2025-08-14T21:44:15.3261893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.3262626Z outputs = self.deberta( 2025-08-14T21:44:15.3263324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.3264065Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.3264817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.3265586Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.3266221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.3266846Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.3267588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.3268369Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.3269115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.3269870Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.3270615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:15.3271590Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:15.3272032Z 2025-08-14T21:44:15.3272157Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3272531Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3272937Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.3273590Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.3274176Z return mod(**inputs) 2025-08-14T21:44:15.3274860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.3275607Z outputs = self.deberta( 2025-08-14T21:44:15.3276333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.3277132Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.3277990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.3278770Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.3279411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.3280177Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.3280961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.3281823Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.3282706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.3283570Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.3284284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.3284910Z return self.act(input) 2025-08-14T21:44:15.3285114Z 2025-08-14T21:44:15.3285250Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3285632Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3286004Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3286435Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.3287116Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.3287712Z return mod(**inputs) 2025-08-14T21:44:15.3288419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.3289189Z outputs = self.deberta( 2025-08-14T21:44:15.3289919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.3290701Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.3291462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.3292236Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.3292916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.3293559Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.3294304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.3295086Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.3295884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.3296635Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.3297354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.3298298Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.3299322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.3300355Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.3300707Z 2025-08-14T21:44:15.3300873Z cudagraph partition due to non gpu ops. 
Subsequent cudagraph partitions repeat the same call chain as above (benchmarks/dynamo/huggingface.py:532 forward_pass through the modeling_deberta_v2.py attention and intermediate layers); innermost frame per entry:
2025-08-14T21:44:15.3301553Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3314044Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops x 2]
2025-08-14T21:44:15.3326131Z Found from : modeling_deberta_v2.py, line 238, in forward: value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads), via line 194, in transpose_for_scores [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3340002Z Found from : modeling_deberta_v2.py, line 268, in forward: context_layer = torch.bmm( [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3351429Z Found from : modeling_deberta_v2.py, line 272, in forward: context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) [cudagraph partition due to non gpu ops x 3]
2025-08-14T21:44:15.3364407Z Found from : modeling_deberta_v2.py, line 446 / line 401, then activations.py, line 69, in forward: return self.act(input) [cudagraph partition due to non gpu ops x 4]
2025-08-14T21:44:15.3377153Z Found from : modeling_deberta_v2.py, line 236, in forward: query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads), via line 194, in transpose_for_scores [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3391099Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3402854Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops x 2]
2025-08-14T21:44:15.3416218Z Found from : modeling_deberta_v2.py, line 238, in forward: value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads), via line 194, in transpose_for_scores [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3431663Z Found from : modeling_deberta_v2.py, line 268, in forward: context_layer = torch.bmm( [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3444303Z Found from : modeling_deberta_v2.py, line 272, in forward: context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) [cudagraph partition due to non gpu ops x 3]
2025-08-14T21:44:15.3457442Z Found from : modeling_deberta_v2.py, line 446 / line 401, then activations.py, line 69, in forward: return self.act(input) [cudagraph partition due to non gpu ops x 4]
2025-08-14T21:44:15.3471241Z Found from : modeling_deberta_v2.py, line 236, in forward: query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads), via line 194, in transpose_for_scores [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3485605Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3497937Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops x 2]
2025-08-14T21:44:15.3512047Z Found from : modeling_deberta_v2.py, line 238, in forward: value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads), via line 194, in transpose_for_scores [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3526191Z Found from : modeling_deberta_v2.py, line 268, in forward: context_layer = torch.bmm( [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3537960Z Found from : modeling_deberta_v2.py, line 272, in forward: context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) [cudagraph partition due to non gpu ops x 3]
2025-08-14T21:44:15.3551902Z Found from : modeling_deberta_v2.py, line 446 / line 401, then activations.py, line 69, in forward: return self.act(input) [cudagraph partition due to non gpu ops x 4]
2025-08-14T21:44:15.3565892Z Found from : modeling_deberta_v2.py, line 236, in forward: query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads), via line 194, in transpose_for_scores [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3580941Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3594897Z Found from : modeling_deberta_v2.py, line 248, in forward: attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) [cudagraph partition due to non gpu ops x 2]
2025-08-14T21:44:15.3608477Z Found from : modeling_deberta_v2.py, line 238, in forward: value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads), via line 194, in transpose_for_scores [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3630249Z Found from : modeling_deberta_v2.py, line 268, in forward: context_layer = torch.bmm( [cudagraph partition due to non gpu ops]
2025-08-14T21:44:15.3643516Z Found from : modeling_deberta_v2.py, line 272, in forward: context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) [cudagraph partition due to non gpu ops x 3]
Found from : 2025-08-14T21:44:15.3658185Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.3658818Z return mod(**inputs) 2025-08-14T21:44:15.3659708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.3660523Z outputs = self.deberta( 2025-08-14T21:44:15.3661265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.3662076Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.3662870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.3663705Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.3664401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.3665110Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.3665909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.3666807Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.3667680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.3668534Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.3669250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.3669884Z return self.act(input) 2025-08-14T21:44:15.3670086Z 2025-08-14T21:44:15.3670224Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3670605Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3670965Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.3671392Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.3672076Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.3672693Z return mod(**inputs) 2025-08-14T21:44:15.3673410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.3674200Z outputs = self.deberta( 2025-08-14T21:44:15.3674968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.3675759Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.3676560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.3677380Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.3678074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.3678752Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.3679679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.3680538Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.3681478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.3682318Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.3683130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.3684167Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.3685275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.3686271Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.3686649Z 2025-08-14T21:44:15.3686852Z cudagraph partition due to non gpu ops. 
[duplicate tracebacks: the same six call stacks above recur, in varying order, for the remaining 26 "Found from :" blocks of this burst (2025-08-14T21:44:15.3687564Z through 2025-08-14T21:44:15.4008484Z), each followed by one to four further "cudagraph partition due to non gpu ops" messages]
Found from : 2025-08-14T21:44:15.4008888Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4009011Z return mod(**inputs) 2025-08-14T21:44:15.4009568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4009691Z outputs = self.deberta( 2025-08-14T21:44:15.4010255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4010376Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4010947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4011103Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4011568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4011709Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4012265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.4012503Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.4013063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.4013266Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.4013707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.4013825Z return self.act(input) 2025-08-14T21:44:15.4013937Z 2025-08-14T21:44:15.4014090Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4014268Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4014405Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4014629Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4015019Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4015133Z return mod(**inputs) 2025-08-14T21:44:15.4015697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4015819Z outputs = self.deberta( 2025-08-14T21:44:15.4016372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4016501Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4017040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4017207Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4017640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4017777Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4018330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4018501Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4019052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4019189Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4019838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.4020241Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.4020873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4021136Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4021144Z 2025-08-14T21:44:15.4021344Z cudagraph partition due to non gpu ops. 
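The entries above all come from inductor's cudagraph partitioning: every stack trace lands in the same few places in the DeBERTa-v2 attention block (the transpose_for_scores reshape reached from lines 236/238, the attention-score torch.bmm at line 248, the context torch.bmm and view at lines 268/272, and the intermediate activation at line 401), and the partitioner reports that it split the graph there because of non gpu ops. As a rough illustration only, the sketch below reproduces that attention pattern in a standalone module and runs it under torch.compile; TinyDisentangledAttention, its sizes, and the logging hint in the comments are assumptions made for this sketch, not the benchmark's actual code or configuration.

# Hypothetical, minimal reproduction of the attention pattern the tracebacks point at.
import torch
from torch import nn

class TinyDisentangledAttention(nn.Module):
    """Sketch of the query/key/value bmm path flagged in the log above."""

    def __init__(self, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.query_proj = nn.Linear(hidden, hidden)
        self.key_proj = nn.Linear(hidden, hidden)
        self.value_proj = nn.Linear(hidden, hidden)

    def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, seq, hidden) -> (batch * heads, seq, head_dim), as in the traced line 194
        b, s, h = x.shape
        x = x.view(b, s, self.heads, h // self.heads).permute(0, 2, 1, 3).contiguous()
        return x.view(-1, s, h // self.heads)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        q = self.transpose_for_scores(self.query_proj(hidden_states))
        k = self.transpose_for_scores(self.key_proj(hidden_states))
        v = self.transpose_for_scores(self.value_proj(hidden_states))
        scale = torch.sqrt(torch.tensor(q.size(-1), dtype=torch.float))
        # attention-score bmm, analogous to the traced line 248
        scores = torch.bmm(q, k.transpose(-1, -2) / scale.to(dtype=q.dtype))
        # context bmm and view, analogous to the traced lines 268 and 272
        context = torch.bmm(scores.softmax(dim=-1), v)
        return context.view(-1, self.heads, context.size(-2), context.size(-1))

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    mod = TinyDisentangledAttention().to(device)
    x = torch.randn(2, 16, 64, device=device)
    # mode="reduce-overhead" asks inductor to use CUDA graphs on GPU; with cudagraph
    # debug logging enabled (for example via the TORCH_LOGS environment variable --
    # exact settings are an assumption here), partition decisions such as
    # "cudagraph partition due to non gpu ops" are reported in the output.
    compiled = torch.compile(mod, mode="reduce-overhead")
    print(compiled(x).shape)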
Found from : 2025-08-14T21:44:15.4021742Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4021866Z return mod(**inputs) 2025-08-14T21:44:15.4022432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4022556Z outputs = self.deberta( 2025-08-14T21:44:15.4023112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4023242Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4023803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4023961Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4024414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4024563Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4025117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4025296Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4025845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4026057Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4026622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4027136Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4027174Z 2025-08-14T21:44:15.4027380Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4027779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4027898Z return mod(**inputs) 2025-08-14T21:44:15.4028464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4028583Z outputs = self.deberta( 2025-08-14T21:44:15.4029137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4029281Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4029840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4030010Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4030447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4030582Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4031140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4031301Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4031844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4031978Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4032529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4032942Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4032953Z 2025-08-14T21:44:15.4033096Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4033299Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4033689Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4033803Z return mod(**inputs) 2025-08-14T21:44:15.4034380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4034499Z outputs = self.deberta( 2025-08-14T21:44:15.4035056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4035196Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4035738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4035905Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4036347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4036487Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4037041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4037202Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4037749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4037898Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4038505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:15.4038927Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:15.4039584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4039839Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4039856Z 2025-08-14T21:44:15.4040060Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4040456Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4040580Z return mod(**inputs) 2025-08-14T21:44:15.4041152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4041273Z outputs = self.deberta( 2025-08-14T21:44:15.4042110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4042258Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4042821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4042976Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4043420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4043561Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4044114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4044284Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4044854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4044992Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4045553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:15.4045683Z context_layer = torch.bmm( 2025-08-14T21:44:15.4045690Z 2025-08-14T21:44:15.4045881Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4046286Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4046399Z return mod(**inputs) 2025-08-14T21:44:15.4046956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4047077Z outputs = self.deberta( 2025-08-14T21:44:15.4047641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4047781Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4048331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4048489Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4048934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4049067Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4049622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4049787Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4050518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4050674Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4051282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:15.4051705Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:15.4051714Z 2025-08-14T21:44:15.4051856Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4052001Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4052207Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4052609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4052726Z return mod(**inputs) 2025-08-14T21:44:15.4053296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4053417Z outputs = self.deberta( 2025-08-14T21:44:15.4053982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4054115Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4054669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4054832Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4055277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4055415Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4055971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.4056203Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.4056763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.4056969Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.4057395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.4057531Z return self.act(input) 2025-08-14T21:44:15.4057539Z 2025-08-14T21:44:15.4057682Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4057829Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4057964Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4058153Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4058550Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4058657Z return mod(**inputs) 2025-08-14T21:44:15.4059221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4059345Z outputs = self.deberta( 2025-08-14T21:44:15.4059990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4060140Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4060700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4060854Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4061314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4061455Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4061991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4062165Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4062769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4062947Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4063501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.4063896Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.4064545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4064795Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4064803Z 2025-08-14T21:44:15.4065008Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4065403Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4065520Z return mod(**inputs) 2025-08-14T21:44:15.4066101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4066226Z outputs = self.deberta( 2025-08-14T21:44:15.4066779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4066913Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4067460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4067623Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4068067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4068211Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4068775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4068949Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4069509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4069652Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4070205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4070626Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4070633Z 2025-08-14T21:44:15.4070829Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4071226Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4071344Z return mod(**inputs) 2025-08-14T21:44:15.4071907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4072043Z outputs = self.deberta( 2025-08-14T21:44:15.4072607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4072739Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4073312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4073468Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4073923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4074067Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4074706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4074886Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4075473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4075646Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4076181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4076597Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4076604Z 2025-08-14T21:44:15.4076762Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4076962Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4077361Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4077475Z return mod(**inputs) 2025-08-14T21:44:15.4078028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4078157Z outputs = self.deberta( 2025-08-14T21:44:15.4078710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4078838Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4079401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4079554Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4079995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4080130Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4080687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4080862Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4081419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4081553Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4082111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:15.4082487Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:15.4083144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4083392Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4083399Z 2025-08-14T21:44:15.4083601Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4084009Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4084123Z return mod(**inputs) 2025-08-14T21:44:15.4084694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4084813Z outputs = self.deberta( 2025-08-14T21:44:15.4085368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4085507Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4086064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4086227Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4086738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4086879Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4087463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4087653Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4088211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4088358Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4088905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:15.4089034Z context_layer = torch.bmm( 2025-08-14T21:44:15.4089041Z 2025-08-14T21:44:15.4089229Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4089631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4089744Z return mod(**inputs) 2025-08-14T21:44:15.4090310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4090445Z outputs = self.deberta( 2025-08-14T21:44:15.4090992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4091122Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4091681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4091832Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4092282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4092424Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4092967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4093148Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4093689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4093830Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4094385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:15.4094754Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:15.4094763Z 2025-08-14T21:44:15.4094913Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4095056Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4095250Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4095657Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4095772Z return mod(**inputs) 2025-08-14T21:44:15.4096339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4096474Z outputs = self.deberta( 2025-08-14T21:44:15.4097036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4097175Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4097729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4097884Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4098326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4098528Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4099093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.4099359Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.4100061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.4100280Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.4100705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.4100827Z return self.act(input) 2025-08-14T21:44:15.4100834Z 2025-08-14T21:44:15.4100986Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4101138Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4101282Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4101478Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4101862Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4101982Z return mod(**inputs) 2025-08-14T21:44:15.4102524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4102636Z outputs = self.deberta( 2025-08-14T21:44:15.4103178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4103299Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4103826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4103973Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4104406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4104553Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4105124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4105296Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4105827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4105958Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4106495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.4106850Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.4107472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4107727Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4107737Z 2025-08-14T21:44:15.4107930Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4108338Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4108454Z return mod(**inputs) 2025-08-14T21:44:15.4109014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4109145Z outputs = self.deberta( 2025-08-14T21:44:15.4109697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4109831Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4110425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4110584Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4111065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4111235Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4111793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4111974Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4112512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4112657Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4113202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4113615Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4113625Z 2025-08-14T21:44:15.4113831Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4114226Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4114353Z return mod(**inputs) 2025-08-14T21:44:15.4114913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4115035Z outputs = self.deberta( 2025-08-14T21:44:15.4115596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4115733Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4116280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4116446Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4116895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4117047Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4117595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4117760Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4118309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4118440Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4118980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4119380Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4119387Z 2025-08-14T21:44:15.4119527Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4119728Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4120120Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4120240Z return mod(**inputs) 2025-08-14T21:44:15.4120807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4120928Z outputs = self.deberta( 2025-08-14T21:44:15.4121474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4121604Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4122161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4122410Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4122867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4123043Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4123614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4123784Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4124340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4124480Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4125024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:15.4125420Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:15.4126052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4126319Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4126330Z 2025-08-14T21:44:15.4126522Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4126918Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4127040Z return mod(**inputs) 2025-08-14T21:44:15.4127612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4127745Z outputs = self.deberta( 2025-08-14T21:44:15.4128295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4128434Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4128991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4129148Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4129611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4129751Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4130296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4130475Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4131017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4131151Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4131732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-08-14T21:44:15.4131859Z context_layer = torch.bmm( 2025-08-14T21:44:15.4131866Z 2025-08-14T21:44:15.4132072Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4132462Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4132574Z return mod(**inputs) 2025-08-14T21:44:15.4133141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4133259Z outputs = self.deberta( 2025-08-14T21:44:15.4133808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4133946Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4134564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4134759Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4135197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4135362Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4135925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4136093Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4136660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4136795Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4137346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-08-14T21:44:15.4137733Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-08-14T21:44:15.4137745Z 2025-08-14T21:44:15.4137893Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4138044Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4138240Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4138640Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4138768Z return mod(**inputs) 2025-08-14T21:44:15.4139339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4139575Z outputs = self.deberta( 2025-08-14T21:44:15.4140162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4140295Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4140873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4141023Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4141469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4141610Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4142513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.4142755Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.4143308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.4143515Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.4143956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.4144070Z return self.act(input) 2025-08-14T21:44:15.4144079Z 2025-08-14T21:44:15.4144219Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4144368Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4144503Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4144695Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4145099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4145212Z return mod(**inputs) 2025-08-14T21:44:15.4145768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4145885Z outputs = self.deberta( 2025-08-14T21:44:15.4146425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4146751Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4147295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4147511Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4148005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4148147Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4148705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4148876Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4149412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4149549Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4150098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.4150481Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.4151114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4151365Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4151372Z 2025-08-14T21:44:15.4151579Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4151969Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4152093Z return mod(**inputs) 2025-08-14T21:44:15.4152647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4152774Z outputs = self.deberta( 2025-08-14T21:44:15.4153345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4153476Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4154034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4154187Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4154624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4154770Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4155304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4155472Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4156047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4156188Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4156744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4157152Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4157159Z 2025-08-14T21:44:15.4157356Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4157771Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4157885Z return mod(**inputs) 2025-08-14T21:44:15.4158451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4158572Z outputs = self.deberta( 2025-08-14T21:44:15.4159182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4159351Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4159909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4160113Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4160575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4160719Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4161288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4161459Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4162014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4162161Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4162716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-08-14T21:44:15.4163132Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-08-14T21:44:15.4163140Z 2025-08-14T21:44:15.4163288Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4163483Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4163892Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4164006Z return mod(**inputs) 2025-08-14T21:44:15.4164550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4164676Z outputs = self.deberta( 2025-08-14T21:44:15.4165207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4165346Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4165877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4166030Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4166473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4166605Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4167138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4167301Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4167833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4167974Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4168492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-08-14T21:44:15.4168854Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-08-14T21:44:15.4169477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4169727Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4169734Z 2025-08-14T21:44:15.4169935Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:44:15.4170333Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.4170448Z     return mod(**inputs)
2025-08-14T21:44:15.4171092Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.4171246Z     outputs = self.deberta(
2025-08-14T21:44:15.4171825Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.4171980Z     encoder_outputs = self.encoder(
2025-08-14T21:44:15.4172516Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.4172681Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:15.4173116Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.4173259Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.4173826Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:15.4173999Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:15.4174554Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:15.4174691Z     self_output, att_matrix = self.self(
2025-08-14T21:44:15.4175243Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward
2025-08-14T21:44:15.4175380Z     context_layer = torch.bmm(
2025-08-14T21:44:15.4175578Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:15.4175983Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.4176099Z     return mod(**inputs)
2025-08-14T21:44:15.4176651Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.4176788Z     outputs = self.deberta(
2025-08-14T21:44:15.4177328Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.4177453Z     encoder_outputs = self.encoder(
2025-08-14T21:44:15.4178013Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.4178165Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:15.4178616Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.4178758Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.4179304Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:15.4179596Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:15.4180160Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:15.4180307Z     self_output, att_matrix = self.self(
2025-08-14T21:44:15.4180841Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward
2025-08-14T21:44:15.4181206Z     context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))
2025-08-14T21:44:15.4181368Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.4181508Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.4181700Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:15.4182102Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.4182216Z     return mod(**inputs)
2025-08-14T21:44:15.4182862Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.4182980Z     outputs = self.deberta(
2025-08-14T21:44:15.4183566Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.4183735Z     encoder_outputs = self.encoder(
2025-08-14T21:44:15.4184291Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.4184457Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:15.4184912Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.4185052Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.4185611Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward
2025-08-14T21:44:15.4185840Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:44:15.4186400Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward
2025-08-14T21:44:15.4186619Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:44:15.4187046Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:44:15.4187173Z     return self.act(input)
2025-08-14T21:44:15.4187325Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.4187467Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.4187616Z cudagraph partition due to non gpu ops
2025-08-14T21:44:15.4187810Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:15.4188200Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:15.4188317Z     return mod(**inputs)
2025-08-14T21:44:15.4188892Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-08-14T21:44:15.4189025Z     outputs = self.deberta(
2025-08-14T21:44:15.4189586Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-08-14T21:44:15.4189719Z     encoder_outputs = self.encoder(
2025-08-14T21:44:15.4190277Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-08-14T21:44:15.4190429Z     output_states, attn_weights = layer_module(
2025-08-14T21:44:15.4190869Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:44:15.4190999Z     return super().__call__(*args, **kwargs)
2025-08-14T21:44:15.4191557Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-08-14T21:44:15.4191734Z     attention_output, att_matrix = self.attention(
2025-08-14T21:44:15.4192290Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-08-14T21:44:15.4192430Z     self_output, att_matrix = self.self(
2025-08-14T21:44:15.4192994Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward
2025-08-14T21:44:15.4193350Z     query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads)
2025-08-14T21:44:15.4193996Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-08-14T21:44:15.4194239Z     return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-08-14T21:44:15.4194448Z cudagraph partition due to non gpu ops.
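The traces above all end at the same handful of DeBERTa-v2 ops: the scaled torch.bmm at modeling_deberta_v2.py line 248, the transpose_for_scores reshape behind the query/value projections, the second bmm and view of the context layer, and the intermediate activation. The "cudagraph partition due to non gpu ops" messages suggest that Inductor's cudagraph integration is splitting the captured graph around work it does not treat as GPU-only. The snippet below is a minimal, hypothetical sketch of that pattern, not the benchmark harness: a toy module that builds its scale factor as a CPU tensor, in the spirit of the DeBERTa-v2 code, and mixes it into otherwise device-resident math, compiled with torch.compile(mode="reduce-overhead") on a CUDA-capable machine. The module name, shapes, and warm-up loop are invented for illustration, and whether this exact partition message appears depends on the PyTorch/Inductor version and its cudagraph settings.

    # Hypothetical repro sketch (not the benchmark's code): a CPU scalar tensor
    # mixed into otherwise-GPU attention math, mirroring the scale.to(...) divide
    # that the traces above keep landing on.
    import torch
    import torch.nn as nn

    class ToyScaledAttentionScores(nn.Module):
        def forward(self, q, k):
            # torch.tensor(...) allocates on the CPU even when q and k are CUDA
            # tensors, so this small piece of the computation runs off the GPU.
            scale = torch.sqrt(torch.tensor(q.size(-1), dtype=torch.float))
            return torch.bmm(q, k.transpose(-1, -2) / scale.to(dtype=q.dtype))

    if torch.cuda.is_available():
        # mode="reduce-overhead" turns on cudagraphs in the compiled path.
        mod = torch.compile(ToyScaledAttentionScores(), mode="reduce-overhead")
        q = torch.randn(8, 128, 64, device="cuda")
        k = torch.randn(8, 128, 64, device="cuda")
        for _ in range(3):  # a few iterations so cudagraph recording gets a chance to run
            out = mod(q, k)
        print(out.shape)  # torch.Size([8, 128, 128])

If the eager model likewise computes its scale via torch.sqrt(torch.tensor(...)), that CPU-resident scalar is one plausible reason these particular frames keep showing up as partition points, though the log alone does not prove which op Inductor classified as non-GPU.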
Found from : 2025-08-14T21:44:15.4354823Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4354955Z return mod(**inputs) 2025-08-14T21:44:15.4355507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4355621Z outputs = self.deberta( 2025-08-14T21:44:15.4356192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4356324Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4356887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4357049Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4357502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4357656Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4358227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.4358450Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.4358993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.4359193Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.4359609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.4359732Z return self.act(input) 2025-08-14T21:44:15.4359739Z 2025-08-14T21:44:15.4359890Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4360046Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4360184Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4360386Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4360784Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4360901Z return mod(**inputs) 2025-08-14T21:44:15.4361468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4361589Z outputs = self.deberta( 2025-08-14T21:44:15.4362146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4362288Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4362908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4363070Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4363546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4363704Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4364235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4364393Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4364924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4365064Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4365595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.4365967Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.4366591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4366836Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4366853Z 2025-08-14T21:44:15.4367038Z cudagraph partition due to non gpu ops. 
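Note on the messages above (editorial, not part of the job output): every "cudagraph partition due to non gpu ops" stack ends inside the DeBERTa-v2 attention or activation code, most pointedly the bmm on modeling_deberta_v2.py line 248, which divides by scale.to(dtype=query_layer.dtype); in current transformers releases that scale is created with torch.tensor(...) and so typically lives on the CPU. The snippet below is a minimal, hedged sketch of that pattern, not the benchmark harness itself; the function name attention_scores, the shapes, and the use of mode="reduce-overhead" are illustrative assumptions, and whether the exact log line appears depends on the PyTorch build and inductor configuration.

# Minimal sketch (assumptions noted above): a CPU scalar tensor mixed into
# otherwise-GPU attention math, mirroring the line-248 frame in the traces.
# Under torch.compile with cudagraphs, a non-GPU op like this is the kind of
# thing that makes Inductor partition the graph instead of capturing it whole.
import torch

def attention_scores(query_layer: torch.Tensor, key_layer: torch.Tensor) -> torch.Tensor:
    # `scale` is created without a device argument, so it lives on the CPU,
    # while query_layer / key_layer are CUDA tensors.
    scale = torch.sqrt(torch.tensor(query_layer.size(-1), dtype=torch.float))
    return torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))

if torch.cuda.is_available():
    q = torch.randn(8, 128, 64, device="cuda")
    k = torch.randn(8, 128, 64, device="cuda")
    compiled = torch.compile(attention_scores, mode="reduce-overhead")  # enables cudagraphs
    print(compiled(q, k).shape)  # torch.Size([8, 128, 128])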
Found from : 2025-08-14T21:44:15.4510825Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4510905Z return mod(**inputs) 2025-08-14T21:44:15.4511211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4511286Z outputs = self.deberta( 2025-08-14T21:44:15.4511594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4511675Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4511973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4512075Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4512317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4512413Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4512708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward 2025-08-14T21:44:15.4512888Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:44:15.4513197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward 2025-08-14T21:44:15.4513345Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:44:15.4513603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:44:15.4513683Z return self.act(input) 2025-08-14T21:44:15.4513688Z 2025-08-14T21:44:15.4513777Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4513873Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4513957Z cudagraph partition due to non gpu ops 2025-08-14T21:44:15.4514069Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:44:15.4514299Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:44:15.4514371Z return mod(**inputs) 2025-08-14T21:44:15.4514681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-08-14T21:44:15.4514758Z outputs = self.deberta( 2025-08-14T21:44:15.4515049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-08-14T21:44:15.4515141Z encoder_outputs = self.encoder( 2025-08-14T21:44:15.4515432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-08-14T21:44:15.4515525Z output_states, attn_weights = layer_module( 2025-08-14T21:44:15.4515773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:44:15.4515858Z return super().__call__(*args, **kwargs) 2025-08-14T21:44:15.4516166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-08-14T21:44:15.4516269Z attention_output, att_matrix = self.attention( 2025-08-14T21:44:15.4516575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-08-14T21:44:15.4516670Z self_output, att_matrix = self.self( 2025-08-14T21:44:15.4516962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-08-14T21:44:15.4517180Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-08-14T21:44:15.4517516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-08-14T21:44:15.4517663Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-08-14T21:44:15.4517666Z 2025-08-14T21:44:15.4517788Z cudagraph partition due to non gpu ops. 
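All of the DeBERTa-v2 traces above point at the same few tensor ops: the transpose_for_scores reshape, the scaled torch.bmm that builds attention_scores, and the context_layer bmm/view. The snippet below is a minimal, self-contained sketch of that bmm-based attention path for reference; it is not the transformers or benchmark code, and the shapes, Linear projections, and sqrt(head_dim) scale are illustrative assumptions.

import torch

batch, heads, seq, head_dim = 2, 4, 8, 16          # hypothetical shapes, for illustration only
hidden_size = heads * head_dim

def transpose_for_scores(x, num_heads):
    # (batch, seq, hidden) -> (batch * num_heads, seq, head_dim), mirroring the
    # permute/contiguous/view chain quoted in the traces above
    b, s, h = x.shape
    x = x.view(b, s, num_heads, h // num_heads)
    return x.permute(0, 2, 1, 3).contiguous().view(b * num_heads, s, h // num_heads)

query_proj = torch.nn.Linear(hidden_size, hidden_size)
key_proj = torch.nn.Linear(hidden_size, hidden_size)
value_proj = torch.nn.Linear(hidden_size, hidden_size)

hidden_states = torch.randn(batch, seq, hidden_size)
query_layer = transpose_for_scores(query_proj(hidden_states), heads)
key_layer = transpose_for_scores(key_proj(hidden_states), heads)
value_layer = transpose_for_scores(value_proj(hidden_states), heads)

# scaled batched matmul for the attention scores, as in the traced line 248
scale = torch.sqrt(torch.tensor(head_dim, dtype=torch.float32))
attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
attention_probs = torch.softmax(attention_scores, dim=-1)

# context_layer bmm, then view back to (batch, heads, seq, head_dim), as in the traced lines 268/272
context_layer = torch.bmm(attention_probs, value_layer)
context_layer = context_layer.view(-1, heads, context_layer.size(-2), context_layer.size(-1))
print(context_layer.shape)                          # torch.Size([2, 4, 8, 16])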
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
    outputs = self.deberta(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
    output_states, attn_weights = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 446, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 401, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1262, in forward
    start_loss = loss_fct(start_logits, start_positions)

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1263, in forward
    end_loss = loss_fct(end_logits, end_positions)

Compilation time (from dynamo_timed): 31.667615766
pass
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
TIMING: _recursive_pre_grad_passes:0.09241 _recursive_joint_graph_passes:1.17778 _recursive_post_grad_passes:0.29682 async_compile.wait:0.58174 code_gen:12.66822 inductor_compile:16.36615 backend_compile:26.30151 gc:0.00014 entire_frame_compile:31.66762 total_wall_time:31.66762
STATS: call_* op count: 1087 | FakeTensorMode.__torch_dispatch__:57230 | FakeTensor.__torch_dispatch__:9191 | ProxyTorchDispatchMode.__torch_dispatch__:13100
Dynamo produced 1 graphs covering 1087 ops with 0 graph breaks (0 unique)
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  from pkg_resources import resource_filename

loading model: 0it [00:00, ?it/s]
loading model: 0it [00:00, ?it/s]
cpu eval DistilBertForMaskedLM
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
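For reference, the timing and STATS lines above come from the dynamo benchmark harness compiling each model with torch.compile and then calling mod(**inputs) in forward_pass. A rough, stand-alone CPU reproduction of that flow is sketched below; it is not the CI harness, and the checkpoint name distilbert-base-uncased is only an assumed example matching the DistilBertForMaskedLM entry above.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed checkpoint for illustration; the benchmark harness resolves model names itself.
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased").eval()

compiled = torch.compile(model)  # TorchDynamo + Inductor; runs on CPU in this setup

inputs = tok("Paris is the [MASK] of France.", return_tensors="pt")
with torch.no_grad():
    out = compiled(**inputs)     # the first call triggers the compilation being timed above
print(out.logits.shape)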
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward
    dlbrt_output = self.distilbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
    return self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward
    sa_output = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward
    dlbrt_output = self.distilbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
    return self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward
    ffn_output = self.ffn(sa_output) # (bs, seq_length, dim)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward
    return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
    return forward_fn(*input_tensors)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk
    x = self.activation(x)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward
    dlbrt_output = self.distilbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
    return self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward
    sa_output = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:44:43.8063709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:44:43.8064033Z return mod(**inputs)
2025-08-14T21:44:43.8064427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 843, in forward
2025-08-14T21:44:43.8064976Z mlm_loss = self.mlm_loss_fct(prediction_logits.view(-1, prediction_logits.size(-1)), labels.view(-1))
2025-08-14T21:44:43.8065225Z
2025-08-14T21:44:51.6515861Z Compilation time (from dynamo_timed): 12.634259881
2025-08-14T21:44:51.6518458Z pass
2025-08-14T21:44:51.6518877Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:51.6524156Z TIMING: _recursive_pre_grad_passes:0.02202 _recursive_joint_graph_passes:0.26544 _recursive_post_grad_passes:0.05064 async_compile.wait:0.8208 code_gen:7.58505 inductor_compile:8.77021 backend_compile:11.18656 gc:0.00017 entire_frame_compile:12.63426 total_wall_time:12.63426
2025-08-14T21:44:51.6525333Z STATS: call_* op count: 153 | FakeTensorMode.__torch_dispatch__:12821 | FakeTensor.__torch_dispatch__:2081 | ProxyTorchDispatchMode.__torch_dispatch__:2801
2025-08-14T21:44:51.6526992Z Dynamo produced 1 graphs covering 153 ops with 0 graph breaks (0 unique)
2025-08-14T21:44:57.1317360Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:44:57.1318409Z from pkg_resources import resource_filename
2025-08-14T21:44:57.7238933Z
2025-08-14T21:44:58.3125314Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:44:58.3125615Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:44:58.3132724Z cpu eval DistilBertForQuestionAnswering
2025-08-14T21:44:58.6865163Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:58.7374026Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:44:58.7859125Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:04.7166732Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7167054Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7167279Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7167492Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7167702Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7167909Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7168108Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7168315Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7168568Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7168790Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7169024Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7169244Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7169493Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7169765Z cudagraph partition due to non gpu ops.
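The summary line "Dynamo produced 1 graphs covering 153 ops with 0 graph breaks (0 unique)" can be reproduced outside the harness with torch._dynamo.explain. A rough sketch follows, assuming a PyTorch 2.x install and reusing the toy model and input from the earlier sketch; the exact fields on the returned ExplainOutput (such as graph_count) may vary slightly between releases.

# Rough sketch (assumptions noted above): ask Dynamo how many graphs and graph
# breaks it would produce for the same kind of forward call the benchmark runs.
import torch
import torch._dynamo as dynamo
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased").eval()
inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")

explanation = dynamo.explain(model)(**inputs)
print(explanation)  # prints graph count, graph break count, op count, break reasons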
Found from :
2025-08-14T21:45:04.7170189Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:45:04.7170544Z return mod(**inputs)
2025-08-14T21:45:04.7171015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward
2025-08-14T21:45:04.7171458Z distilbert_output = self.distilbert(
2025-08-14T21:45:04.7171953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
2025-08-14T21:45:04.7172411Z return self.transformer(
2025-08-14T21:45:04.7172855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
2025-08-14T21:45:04.7173311Z layer_outputs = layer_module(
2025-08-14T21:45:04.7173689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:45:04.7174089Z return super().__call__(*args, **kwargs)
2025-08-14T21:45:04.7174572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward
2025-08-14T21:45:04.7175049Z sa_output = self.attention(
2025-08-14T21:45:04.7175475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward
2025-08-14T21:45:04.7176006Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:45:04.7176235Z
2025-08-14T21:45:04.7176332Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7176973Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7177246Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:45:04.7177722Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:45:04.7178137Z return mod(**inputs)
2025-08-14T21:45:04.7178585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward
2025-08-14T21:45:04.7179064Z distilbert_output = self.distilbert(
2025-08-14T21:45:04.7179688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
2025-08-14T21:45:04.7180150Z return self.transformer(
2025-08-14T21:45:04.7180598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
2025-08-14T21:45:04.7181049Z layer_outputs = layer_module(
2025-08-14T21:45:04.7181422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:45:04.7181822Z return super().__call__(*args, **kwargs)
2025-08-14T21:45:04.7182266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 494, in forward
2025-08-14T21:45:04.7182750Z ffn_output = self.ffn(sa_output) # (bs, seq_length, dim)
2025-08-14T21:45:04.7183226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 428, in forward
2025-08-14T21:45:04.7183808Z return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, input)
2025-08-14T21:45:04.7184372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:45:04.7184798Z return forward_fn(*input_tensors)
2025-08-14T21:45:04.7185234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 432, in ff_chunk
2025-08-14T21:45:04.7185672Z x = self.activation(x)
2025-08-14T21:45:04.7186020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:45:04.7186391Z return self.act(input)
2025-08-14T21:45:04.7186518Z
2025-08-14T21:45:04.7186605Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7186830Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7187051Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7187263Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7187478Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7187680Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7187874Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7188075Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7188316Z cudagraph partition due to non gpu ops.
2025-08-14T21:45:04.7268732Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7268929Z cudagraph partition due to non gpu ops
2025-08-14T21:45:04.7269151Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:45:04.7269486Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:45:04.7269801Z return mod(**inputs)
2025-08-14T21:45:04.7270169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1061, in forward
2025-08-14T21:45:04.7270593Z start_loss = loss_fct(start_logits, start_positions)
2025-08-14T21:45:04.7270742Z
2025-08-14T21:45:04.7270842Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:45:04.7271181Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:45:04.7271488Z return mod(**inputs)
2025-08-14T21:45:04.7271852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1062, in forward
2025-08-14T21:45:04.7272265Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:45:04.7272413Z
2025-08-14T21:45:12.1684965Z Compilation time (from dynamo_timed): 12.225534796
2025-08-14T21:45:12.1685263Z pass
2025-08-14T21:45:12.1685590Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:12.1686359Z TIMING: _recursive_pre_grad_passes:0.02055 _recursive_joint_graph_passes:0.24552 _recursive_post_grad_passes:0.05829 async_compile.wait:0.71482 code_gen:7.21779 inductor_compile:8.39822 backend_compile:10.79642 gc:0.00016 entire_frame_compile:12.22553 total_wall_time:12.22553
2025-08-14T21:45:12.1691968Z STATS: call_* op count: 161 | FakeTensorMode.__torch_dispatch__:12745 | FakeTensor.__torch_dispatch__:2105 | ProxyTorchDispatchMode.__torch_dispatch__:2842
2025-08-14T21:45:12.1693998Z Dynamo produced 1 graphs covering 161 ops with 0 graph breaks (0 unique)
2025-08-14T21:45:17.6485711Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:45:17.6487676Z from pkg_resources import resource_filename
2025-08-14T21:45:18.2421351Z
2025-08-14T21:45:20.3809941Z loading model: 0it [00:00, ?it/s]`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
2025-08-14T21:45:20.3811454Z WARNING:transformers.modeling_utils:`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
2025-08-14T21:45:20.4066345Z
2025-08-14T21:45:20.4071570Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:45:20.4072431Z cpu eval DistillGPT2
2025-08-14T21:45:20.8406482Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:21.0242616Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:21.2112268Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:28.4597745Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4598100Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4598338Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4598572Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4598808Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4599036Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4599257Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4599484Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4599706Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4599921Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4600145Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4600415Z cudagraph partition due to non gpu ops.
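The two short stacks ending at lines 1061 and 1062 of modeling_distilbert.py are the start- and end-span losses of the question-answering head. Below is a simplified sketch of that computation; the ignore_index handling and the (start + end) / 2 averaging follow the usual Hugging Face pattern and are assumptions here, not lines copied from the file.

# Simplified sketch of the span loss the frames above point at (shapes are illustrative).
import torch
import torch.nn as nn

batch, seq_len = 4, 128
start_logits = torch.randn(batch, seq_len)
end_logits = torch.randn(batch, seq_len)
start_positions = torch.randint(0, seq_len, (batch,))
end_positions = torch.randint(0, seq_len, (batch,))

loss_fct = nn.CrossEntropyLoss(ignore_index=seq_len)  # out-of-range targets are ignored
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
print(total_loss.item())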
Found from :
2025-08-14T21:45:28.4600975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-08-14T21:45:28.4601448Z transformer_outputs = self.transformer(
2025-08-14T21:45:28.4601984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:45:28.4602426Z outputs = block(
2025-08-14T21:45:28.4602804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:45:28.4603231Z return super().__call__(*args, **kwargs)
2025-08-14T21:45:28.4603662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:45:28.4604082Z return func(*args, **kwargs)
2025-08-14T21:45:28.4604520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-08-14T21:45:28.4604972Z attn_output, self_attn_weights = self.attn(
2025-08-14T21:45:28.4605412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:45:28.4605841Z return func(*args, **kwargs)
2025-08-14T21:45:28.4606263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-08-14T21:45:28.4606717Z attn_output, attn_weights = attention_interface(
2025-08-14T21:45:28.4607227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:45:28.4607764Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:45:28.4607990Z
2025-08-14T21:45:28.4608110Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:45:28.4608591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:28.4609037Z transformer_outputs = self.transformer( 2025-08-14T21:45:28.4609862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:28.4610284Z outputs = block( 2025-08-14T21:45:28.4610709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:28.4611161Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:28.4611586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:28.4612002Z return func(*args, **kwargs) 2025-08-14T21:45:28.4612422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:45:28.4612866Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:45:28.4613302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:28.4613718Z return func(*args, **kwargs) 2025-08-14T21:45:28.4614131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:45:28.4614591Z attn_output, attn_weights = attention_interface( 2025-08-14T21:45:28.4615077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:45:28.4615593Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:45:28.4615773Z 2025-08-14T21:45:28.4615864Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4616098Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4616357Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:28.4616809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-08-14T21:45:28.4617258Z transformer_outputs = self.transformer( 2025-08-14T21:45:28.4617708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:45:28.4618119Z outputs = block( 2025-08-14T21:45:28.4618476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:28.4618885Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:28.4619305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:45:28.4619888Z return func(*args, **kwargs) 2025-08-14T21:45:28.4620369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:45:28.4620943Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:45:28.4621411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:45:28.4621839Z hidden_states = self.act(hidden_states) 2025-08-14T21:45:28.4622233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:45:28.4622781Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:45:28.4623041Z 2025-08-14T21:45:28.4623140Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4623397Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4623625Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4623841Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4624067Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4624290Z cudagraph partition due to non gpu ops 2025-08-14T21:45:28.4624548Z cudagraph partition due to non gpu ops. 
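The two sdpa_attention.py frames above (lines 81 and 91) are the call into torch.nn.functional.scaled_dot_product_attention and the transpose/contiguous that follows it. A minimal, self-contained sketch of that call pattern; the shapes and the helper name sdpa_block are illustrative, not the transformers implementation.

    import torch
    import torch.nn.functional as F

    def sdpa_block(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=True):
        # query/key/value: (batch, num_heads, seq_len, head_dim)
        attn_output = F.scaled_dot_product_attention(
            query, key, value,
            attn_mask=attn_mask, dropout_p=dropout_p, is_causal=is_causal,
        )
        # Back to (batch, seq_len, num_heads, head_dim); contiguous so the
        # usual reshape into (batch, seq_len, hidden_size) is cheap.
        return attn_output.transpose(1, 2).contiguous()

    q = k = v = torch.randn(1, 12, 128, 64)
    out = sdpa_block(q, k, v)  # -> (1, 128, 12, 64)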
[the same three "Found from" traces (sdpa_attention.py line 81, sdpa_attention.py line 91, and activations.py line 47 via the GPT-2 MLP) and further runs of "cudagraph partition due to non gpu ops" repeat verbatim for each remaining DistillGPT2 transformer block, 2025-08-14T21:45:28.4624997Z through 2025-08-14T21:45:28.4725250Z]
Found from :
2025-08-14T21:45:28.4725645Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-08-14T21:45:28.4726033Z     transformer_outputs = self.transformer(
2025-08-14T21:45:28.4726409Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:45:28.4726771Z     outputs = block(
2025-08-14T21:45:28.4727077Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:45:28.4727428Z     return super().__call__(*args, **kwargs)
2025-08-14T21:45:28.4728606Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:45:28.4729015Z     return func(*args, **kwargs)
2025-08-14T21:45:28.4729368Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-08-14T21:45:28.4729790Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-08-14T21:45:28.4730190Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward
2025-08-14T21:45:28.4730570Z     hidden_states = self.act(hidden_states)
2025-08-14T21:45:28.4730915Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:45:28.4731355Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:45:28.4731584Z
2025-08-14T21:45:28.4731674Z cudagraph partition due to non gpu ops
2025-08-14T21:45:28.4731908Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:45:28.4732320Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1207, in forward
2025-08-14T21:45:28.4732745Z     logits = self.lm_head(hidden_states[:, slice_indices, :])
2025-08-14T21:45:28.4732916Z
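The transformers/activations.py line 47 frame that recurs in these traces is the tanh approximation of GELU (transformers' "gelu_new"). As a quick, hedged check rather than anything taken from the benchmark itself, the expression shown in the trace can be compared against PyTorch's built-in approximate GELU:

    import math
    import torch
    import torch.nn.functional as F

    def gelu_new(x: torch.Tensor) -> torch.Tensor:
        # The exact expression shown in the trace above.
        return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi)
                                           * (x + 0.044715 * torch.pow(x, 3.0))))

    x = torch.randn(4, 8)
    # Should agree with the built-in tanh approximation to within float tolerance.
    torch.testing.assert_close(gelu_new(x), F.gelu(x, approximate="tanh"))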
2025-08-14T21:45:36.4707909Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:45:36.4708585Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-08-14T21:45:36.4709186Z     loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-08-14T21:45:36.4709745Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-08-14T21:45:36.4714964Z     loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T21:45:36.4719245Z
2025-08-14T21:45:37.6110868Z Compilation time (from dynamo_timed): 15.006934325
2025-08-14T21:45:37.6265048Z pass
2025-08-14T21:45:37.6265506Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:37.6266390Z TIMING: gc:0.00364 entire_frame_compile:15.00693 _recursive_pre_grad_passes:0.02681 _recursive_joint_graph_passes:0.23645 _recursive_post_grad_passes:0.05638 async_compile.wait:1.41355 code_gen:8.56119 inductor_compile:10.10144 backend_compile:11.94898 total_wall_time:15.00693
2025-08-14T21:45:37.6267385Z STATS: call_* op count: 299 | FakeTensorMode.__torch_dispatch__:12355 | FakeTensor.__torch_dispatch__:2126 | ProxyTorchDispatchMode.__torch_dispatch__:2254
2025-08-14T21:45:37.6267916Z Dynamo produced 2 graphs covering 299 ops with 2 graph breaks (1 unique)
2025-08-14T21:45:43.1736367Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:45:43.1737403Z   from pkg_resources import resource_filename
2025-08-14T21:45:43.8109104Z
2025-08-14T21:45:43.8124546Z loading model: 0it [00:00, ?it/s]If you want to use `ElectraForCausalLM` as a standalone, add `is_decoder=True.`
2025-08-14T21:45:43.8125390Z WARNING:transformers.models.electra.modeling_electra:If you want to use `ElectraForCausalLM` as a standalone, add `is_decoder=True.`
2025-08-14T21:45:44.2721471Z
2025-08-14T21:45:44.2722300Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:45:44.2734941Z cpu eval ElectraForCausalLM
2025-08-14T21:45:44.4918788Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:44.6031272Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:44.6887595Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:45:54.9480836Z cudagraph partition due to non gpu ops
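The ForCausalLMLoss / fixed_cross_entropy frames at the top of this block reduce to a cross-entropy over next-token predictions. A minimal sketch of that computation, with the label shift written out explicitly; this illustrates the code path named in the trace, not transformers' exact implementation (which is handed already-shifted labels).

    import torch
    import torch.nn.functional as F

    def causal_lm_loss(logits: torch.Tensor, labels: torch.Tensor, ignore_index: int = -100) -> torch.Tensor:
        # logits: (batch, seq_len, vocab); labels: (batch, seq_len)
        # Position t predicts token t+1: drop the last logit and the first label.
        shift_logits = logits[:, :-1, :].contiguous()
        shift_labels = labels[:, 1:].contiguous()
        return F.cross_entropy(
            shift_logits.view(-1, shift_logits.size(-1)),
            shift_labels.view(-1),
            ignore_index=ignore_index,
        )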
Found from : 2025-08-14T21:45:54.9482812Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:54.9486875Z return mod(**inputs) 2025-08-14T21:45:54.9492073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:45:54.9495603Z outputs = self.electra( 2025-08-14T21:45:54.9496283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 797, in forward 2025-08-14T21:45:54.9496922Z hidden_states = self.embeddings_project(hidden_states) 2025-08-14T21:45:54.9497210Z 2025-08-14T21:45:54.9497534Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9497809Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9498184Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9499043Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9499406Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9499743Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9499990Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9500299Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9500531Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9500767Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9500998Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:54.9501372Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:54.9501708Z return mod(**inputs) 2025-08-14T21:45:54.9502119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:45:54.9502517Z outputs = self.electra( 2025-08-14T21:45:54.9502901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:45:54.9503296Z hidden_states = self.encoder( 2025-08-14T21:45:54.9503674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:45:54.9504061Z layer_outputs = layer_module( 2025-08-14T21:45:54.9504407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:54.9504773Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:54.9505165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:45:54.9505568Z layer_output = apply_chunking_to_forward( 2025-08-14T21:45:54.9505971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:54.9506359Z return forward_fn(*input_tensors) 2025-08-14T21:45:54.9506769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:45:54.9507239Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:45:54.9507687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:45:54.9508130Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:45:54.9508551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 
2025-08-14T21:45:54.9508893Z return self.act(input) 2025-08-14T21:45:54.9509006Z 2025-08-14T21:45:54.9509092Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9509524Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9509742Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9510039Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9510237Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9510466Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9510666Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9510860Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9511063Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9511263Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9511457Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9511688Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:54.9512049Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:54.9512374Z return mod(**inputs) 2025-08-14T21:45:54.9512744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:45:54.9513133Z outputs = self.electra( 2025-08-14T21:45:54.9513506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:45:54.9513887Z hidden_states = self.encoder( 2025-08-14T21:45:54.9514268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:45:54.9514655Z layer_outputs = layer_module( 2025-08-14T21:45:54.9514992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:54.9515338Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:54.9515729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:45:54.9516125Z layer_output = apply_chunking_to_forward( 2025-08-14T21:45:54.9516517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:54.9516892Z return forward_fn(*input_tensors) 2025-08-14T21:45:54.9517305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:45:54.9517767Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:45:54.9518188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:45:54.9518604Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:45:54.9518973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:54.9519305Z return self.act(input) 2025-08-14T21:45:54.9519415Z 2025-08-14T21:45:54.9519493Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9519704Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9519914Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9520104Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9520298Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9520496Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9520691Z cudagraph 
partition due to non gpu ops 2025-08-14T21:45:54.9520881Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9521075Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9521269Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9521457Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9521682Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:45:54.9522037Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:54.9522341Z return mod(**inputs) 2025-08-14T21:45:54.9522748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:45:54.9523128Z outputs = self.electra( 2025-08-14T21:45:54.9523512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:45:54.9523904Z hidden_states = self.encoder( 2025-08-14T21:45:54.9524276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:45:54.9524652Z layer_outputs = layer_module( 2025-08-14T21:45:54.9524978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:54.9525329Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:54.9525714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:45:54.9526103Z layer_output = apply_chunking_to_forward( 2025-08-14T21:45:54.9526495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:54.9526876Z return forward_fn(*input_tensors) 2025-08-14T21:45:54.9527282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:45:54.9527736Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:45:54.9528148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:45:54.9528561Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:45:54.9528925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:54.9529243Z return self.act(input) 2025-08-14T21:45:54.9529356Z 2025-08-14T21:45:54.9529432Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9529637Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9529836Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9530032Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9530231Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9530433Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9530621Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9530814Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9531006Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9531195Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9531393Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9531617Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:54.9531961Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:54.9532273Z return mod(**inputs) 2025-08-14T21:45:54.9532635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:45:54.9533017Z outputs = self.electra( 2025-08-14T21:45:54.9533371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:45:54.9533753Z hidden_states = self.encoder( 2025-08-14T21:45:54.9534125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:45:54.9534503Z layer_outputs = layer_module( 2025-08-14T21:45:54.9534831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:54.9535176Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:54.9535555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:45:54.9535938Z layer_output = apply_chunking_to_forward( 2025-08-14T21:45:54.9536355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:54.9536751Z return forward_fn(*input_tensors) 2025-08-14T21:45:54.9537160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:45:54.9537657Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:45:54.9538080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:45:54.9538494Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:45:54.9538859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:54.9539180Z return self.act(input) 2025-08-14T21:45:54.9539301Z 2025-08-14T21:45:54.9539386Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9539817Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9540042Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9540272Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9540493Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9540717Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9540925Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9541133Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9541345Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9541559Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9542005Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9542278Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:54.9542670Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:54.9543025Z return mod(**inputs) 2025-08-14T21:45:54.9543440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:45:54.9543861Z outputs = self.electra( 2025-08-14T21:45:54.9544267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:45:54.9544695Z hidden_states = self.encoder( 2025-08-14T21:45:54.9545115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:45:54.9545532Z layer_outputs = layer_module( 2025-08-14T21:45:54.9545909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:54.9546306Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:54.9546735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:45:54.9547173Z layer_output = apply_chunking_to_forward( 2025-08-14T21:45:54.9547603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:54.9548007Z return forward_fn(*input_tensors) 2025-08-14T21:45:54.9548404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:45:54.9548851Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:45:54.9549264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:45:54.9549676Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:45:54.9550033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:54.9550360Z return self.act(input) 2025-08-14T21:45:54.9550468Z 2025-08-14T21:45:54.9550551Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9550820Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9551060Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9551271Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9551485Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9551712Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9551916Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9552114Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9552306Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9552510Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9552710Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9552928Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:45:54.9553277Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:45:54.9553593Z return mod(**inputs) 2025-08-14T21:45:54.9553958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1543, in forward 2025-08-14T21:45:54.9554330Z outputs = self.electra( 2025-08-14T21:45:54.9554689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward 2025-08-14T21:45:54.9555067Z hidden_states = self.encoder( 2025-08-14T21:45:54.9555428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward 2025-08-14T21:45:54.9555812Z layer_outputs = layer_module( 2025-08-14T21:45:54.9556143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:45:54.9556492Z return super().__call__(*args, **kwargs) 2025-08-14T21:45:54.9556865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward 2025-08-14T21:45:54.9557256Z layer_output = apply_chunking_to_forward( 2025-08-14T21:45:54.9557641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:45:54.9558008Z return forward_fn(*input_tensors) 2025-08-14T21:45:54.9558408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk 2025-08-14T21:45:54.9558863Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:45:54.9559278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward 2025-08-14T21:45:54.9559744Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:45:54.9560109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:45:54.9560435Z return self.act(input) 2025-08-14T21:45:54.9560541Z 2025-08-14T21:45:54.9560627Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9560821Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9561024Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9561220Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9561410Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9561613Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9561817Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9562006Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9562202Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9562402Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9562600Z cudagraph partition due to non gpu ops 2025-08-14T21:45:54.9562818Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:45:54.9622354Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:45:54.9622723Z return mod(**inputs)
2025-08-14T21:45:54.9623077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1564, in forward
2025-08-14T21:45:54.9623457Z lm_loss = self.loss_function(
2025-08-14T21:45:54.9623810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-08-14T21:45:54.9624264Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-08-14T21:45:54.9624731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-08-14T21:45:54.9625214Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T21:45:54.9625454Z
2025-08-14T21:46:03.5594776Z Compilation time (from dynamo_timed): 17.644389658
2025-08-14T21:46:03.5700174Z pass
2025-08-14T21:46:03.5700626Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:03.5701463Z TIMING: _recursive_pre_grad_passes:0.03792 _recursive_joint_graph_passes:0.45818 _recursive_post_grad_passes:0.07568 async_compile.wait:0.78409 code_gen:8.46414 inductor_compile:10.12307 backend_compile:14.79633 gc:0.00051 entire_frame_compile:17.64439 total_wall_time:17.64439
2025-08-14T21:46:03.5702456Z STATS: call_* op count: 377 | FakeTensorMode.__torch_dispatch__:26896 | FakeTensor.__torch_dispatch__:3851 | ProxyTorchDispatchMode.__torch_dispatch__:6491
2025-08-14T21:46:03.5703063Z Dynamo produced 1 graphs covering 377 ops with 0 graph breaks (0 unique)
2025-08-14T21:46:09.1981540Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:46:09.1986401Z from pkg_resources import resource_filename
2025-08-14T21:46:09.8241274Z
2025-08-14T21:46:10.2023854Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:46:10.2027177Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:46:10.2035953Z cpu eval ElectraForQuestionAnswering
2025-08-14T21:46:10.3163239Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:10.3781328Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:10.4368580Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:20.5829681Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5830328Z cudagraph partition due to non gpu ops.
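(Editor's note: the "pass" after the Compilation time line is the accuracy check comparing eager and inductor-compiled outputs on the same inputs dict seen in the tracebacks, mod(**inputs). A minimal sketch of what a cpu_inductor_freezing accuracy pass amounts to is below; the checkpoint, inputs, and tolerances are illustrative assumptions, not the benchmark harness's own code.)

    import torch
    import torch._inductor.config as inductor_config
    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    # Assumption: any small Electra checkpoint stands in for the benchmark's model.
    name = "google/electra-small-discriminator"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForQuestionAnswering.from_pretrained(name).eval()
    inputs = tok("What is freezing?", "Freezing folds weights into the compiled graph.", return_tensors="pt")

    # Freezing constant-folds parameters during inductor compilation.
    inductor_config.freezing = True
    compiled = torch.compile(model, backend="inductor")

    with torch.no_grad():
        expected = model(**inputs)      # eager reference, mirrors forward_pass -> mod(**inputs)
        actual = compiled(**inputs)     # inductor-compiled CPU run
        # Illustrative tolerances; the harness uses its own comparison logic.
        torch.testing.assert_close(expected.start_logits, actual.start_logits, rtol=1e-3, atol=1e-3)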
Found from :
2025-08-14T21:46:20.5834577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:20.5835035Z return mod(**inputs)
2025-08-14T21:46:20.5835560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1317, in forward
2025-08-14T21:46:20.5836086Z discriminator_hidden_states = self.electra(
2025-08-14T21:46:20.5836563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 797, in forward
2025-08-14T21:46:20.5837061Z hidden_states = self.embeddings_project(hidden_states)
2025-08-14T21:46:20.5837254Z
2025-08-14T21:46:20.5837353Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5837609Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5837882Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5838119Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5838331Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5838554Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5838790Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5839008Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5839222Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5839442Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5839701Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:20.5840089Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:20.5840442Z return mod(**inputs)
2025-08-14T21:46:20.5840849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1317, in forward
2025-08-14T21:46:20.5841351Z discriminator_hidden_states = self.electra(
2025-08-14T21:46:20.5841993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 799, in forward
2025-08-14T21:46:20.5842457Z hidden_states = self.encoder(
2025-08-14T21:46:20.5842898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 566, in forward
2025-08-14T21:46:20.5843340Z layer_outputs = layer_module(
2025-08-14T21:46:20.5843739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:46:20.5844153Z return super().__call__(*args, **kwargs)
2025-08-14T21:46:20.5844620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 504, in forward
2025-08-14T21:46:20.5845076Z layer_output = apply_chunking_to_forward(
2025-08-14T21:46:20.5845529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:46:20.5846005Z return forward_fn(*input_tensors)
2025-08-14T21:46:20.5846486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 512, in feed_forward_chunk
2025-08-14T21:46:20.5847021Z intermediate_output = self.intermediate(attention_output)
2025-08-14T21:46:20.5847551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 428, in forward
2025-08-14T21:46:20.5848052Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:46:20.5848483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:46:20.5848894Z return self.act(input)
2025-08-14T21:46:20.5849023Z
2025-08-14T21:46:20.5849108Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5849567Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5849789Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5850066Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5850286Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5850539Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5850761Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5850982Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5851198Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5851422Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5851655Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5851903Z cudagraph partition due to non gpu ops.
2025-08-14T21:46:20.5968843Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5969049Z cudagraph partition due to non gpu ops
2025-08-14T21:46:20.5969271Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:20.5969633Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:20.5969950Z return mod(**inputs)
2025-08-14T21:46:20.5970305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1348, in forward
2025-08-14T21:46:20.5970719Z start_loss = loss_fct(start_logits, start_positions)
2025-08-14T21:46:20.5970871Z
2025-08-14T21:46:20.5970980Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:46:20.5971331Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:20.5971637Z return mod(**inputs)
2025-08-14T21:46:20.5971999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1349, in forward
2025-08-14T21:46:20.5972410Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:46:20.5972551Z
2025-08-14T21:46:28.0837407Z Compilation time (from dynamo_timed): 16.528573474
2025-08-14T21:46:28.0837738Z pass
2025-08-14T21:46:28.0838031Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:28.0838822Z TIMING: _recursive_pre_grad_passes:0.0389 _recursive_joint_graph_passes:0.4633 _recursive_post_grad_passes:0.08145 async_compile.wait:0.00285 code_gen:7.07012 inductor_compile:9.06768 backend_compile:13.68256 gc:0.00043 entire_frame_compile:16.52857 total_wall_time:16.52857
2025-08-14T21:46:28.0839780Z STATS: call_* op count: 378 | FakeTensorMode.__torch_dispatch__:26743 | FakeTensor.__torch_dispatch__:3868 | ProxyTorchDispatchMode.__torch_dispatch__:6518
2025-08-14T21:46:28.0840298Z Dynamo produced 1 graphs covering 378 ops with 0 graph breaks (0 unique)
2025-08-14T21:46:33.6290064Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:46:33.6291102Z from pkg_resources import resource_filename
2025-08-14T21:46:34.3621109Z
2025-08-14T21:46:35.9369300Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:46:35.9374181Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:46:35.9436250Z cpu eval GPT2ForSequenceClassification
2025-08-14T21:46:36.7197028Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:37.0852893Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:37.4687478Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:46:45.7273496Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7274324Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7274552Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7274836Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275107Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275339Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275562Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275799Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276031Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276251Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276478Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276739Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7277000Z cudagraph partition due to non gpu ops.
2025-08-14T21:46:45.7273496Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7274324Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7274552Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7274836Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275107Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275339Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275562Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7275799Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276031Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276251Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276478Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7276739Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7277000Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:45.7277473Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7277849Z     return mod(**inputs)
2025-08-14T21:46:45.7278284Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1509, in forward
2025-08-14T21:46:45.7278739Z     last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
2025-08-14T21:46:45.7278916Z 
2025-08-14T21:46:45.7279004Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7279208Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7279440Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7279647Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7279845Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7280090Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:45.7280506Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7280867Z     return mod(**inputs)
2025-08-14T21:46:45.7281263Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:46:45.7281703Z     transformer_outputs = self.transformer(
2025-08-14T21:46:45.7282138Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:46:45.7282539Z     outputs = block(
2025-08-14T21:46:45.7282898Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:46:45.7283303Z     return super().__call__(*args, **kwargs)
2025-08-14T21:46:45.7283757Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7284181Z     return func(*args, **kwargs)
2025-08-14T21:46:45.7284589Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-08-14T21:46:45.7285027Z     attn_output, self_attn_weights = self.attn(
2025-08-14T21:46:45.7285448Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7285859Z     return func(*args, **kwargs)
2025-08-14T21:46:45.7286264Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-08-14T21:46:45.7286715Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:46:45.7287182Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:46:45.7287704Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:46:45.7287921Z 
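The first trace in this GPT2 burst points at the sequence-classification pooling step, which selects the last non-pad position per row with integer index math. A self-contained sketch of that exact expression; the toy input_ids and pad_token_id below are chosen only for illustration.

import torch

input_ids = torch.tensor([[5, 6, 7, 0, 0],
                          [3, 4, 0, 0, 0]])
pad_token_id = 0

non_pad_mask = (input_ids != pad_token_id).to(torch.long)
token_indices = torch.arange(input_ids.shape[-1])
last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
print(last_non_pad_token)  # tensor([2, 1]) -> index of the last real token in each row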
2025-08-14T21:46:45.7288395Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:45.7288802Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7289231Z     return mod(**inputs)
2025-08-14T21:46:45.7289619Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:46:45.7290103Z     transformer_outputs = self.transformer(
2025-08-14T21:46:45.7290497Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:46:45.7290878Z     outputs = block(
2025-08-14T21:46:45.7291209Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:46:45.7291600Z     return super().__call__(*args, **kwargs)
2025-08-14T21:46:45.7292002Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7292405Z     return func(*args, **kwargs)
2025-08-14T21:46:45.7292805Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-08-14T21:46:45.7293228Z     attn_output, self_attn_weights = self.attn(
2025-08-14T21:46:45.7293648Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7294050Z     return func(*args, **kwargs)
2025-08-14T21:46:45.7294444Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-08-14T21:46:45.7294878Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:46:45.7296660Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:46:45.7297386Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:46:45.7297582Z 
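The two recurring attention traces land on consecutive statements of sdpa_attention_forward: the scaled_dot_product_attention call (line 81 in the trace) and the transpose(1, 2).contiguous() that restores a (batch, seq, heads, head_dim) layout (line 91). A standalone sketch of that pair follows; the shapes are made up and is_causal=True is an assumption for the sketch, not an argument read from the log, and this is not the transformers implementation itself.

import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 4, 8, 16
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
v = torch.randn(batch, heads, seq, head_dim)

attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # as at line 81 in the trace
attn_output = attn_output.transpose(1, 2).contiguous()                 # as at line 91 in the trace
attn_output = attn_output.reshape(batch, seq, heads * head_dim)        # merge the head dimension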
2025-08-14T21:46:45.7297739Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7297976Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7298262Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:45.7298672Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7299058Z     return mod(**inputs)
2025-08-14T21:46:45.7299474Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:46:45.7300093Z     transformer_outputs = self.transformer(
2025-08-14T21:46:45.7300537Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:46:45.7300968Z     outputs = block(
2025-08-14T21:46:45.7301321Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:46:45.7301720Z     return super().__call__(*args, **kwargs)
2025-08-14T21:46:45.7302140Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7302554Z     return func(*args, **kwargs)
2025-08-14T21:46:45.7302959Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-08-14T21:46:45.7303405Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-08-14T21:46:45.7303879Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward
2025-08-14T21:46:45.7304384Z     hidden_states = self.act(hidden_states)
2025-08-14T21:46:45.7304795Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:46:45.7305276Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:46:45.7305532Z 
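The third recurring trace ends at GPT2's MLP activation, the tanh approximation of GELU shown verbatim at activations.py line 47. A quick sketch that reimplements that expression and checks it against PyTorch's built-in approximate="tanh" variant; the comparison itself is added here for illustration and is not part of the benchmark.

import math
import torch

def gelu_tanh(x: torch.Tensor) -> torch.Tensor:
    # Same expression as the traced line in transformers/activations.py.
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

x = torch.randn(1024)
torch.testing.assert_close(gelu_tanh(x), torch.nn.functional.gelu(x, approximate="tanh"))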
Found from : 2025-08-14T21:46:45.7307910Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7308266Z return mod(**inputs) 2025-08-14T21:46:45.7308652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7309135Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7309568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7309980Z outputs = block( 2025-08-14T21:46:45.7310352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7310750Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7311159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7311552Z return func(*args, **kwargs) 2025-08-14T21:46:45.7311958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7312387Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7312803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7313188Z return func(*args, **kwargs) 2025-08-14T21:46:45.7313586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7314027Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7314507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7315020Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7315226Z 2025-08-14T21:46:45.7315339Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7330792Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7331184Z return mod(**inputs) 2025-08-14T21:46:45.7331612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7332066Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7332525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7332951Z outputs = block( 2025-08-14T21:46:45.7333313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7333728Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7334159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7334577Z return func(*args, **kwargs) 2025-08-14T21:46:45.7334991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7335433Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7335866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7336274Z return func(*args, **kwargs) 2025-08-14T21:46:45.7336819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7337320Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7337817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7338356Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7338550Z 2025-08-14T21:46:45.7338646Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7338891Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7339152Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7339685Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7340067Z return mod(**inputs) 2025-08-14T21:46:45.7340469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7340915Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7341335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7342007Z outputs = block( 2025-08-14T21:46:45.7342384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7342786Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7343210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7343624Z return func(*args, **kwargs) 2025-08-14T21:46:45.7344029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7344488Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7344968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7345397Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7345777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7346280Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7346531Z 2025-08-14T21:46:45.7346631Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7346861Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7347092Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7347324Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7347550Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7347769Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7348030Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7348435Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7348786Z return mod(**inputs) 2025-08-14T21:46:45.7349189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7349629Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7350053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7350418Z outputs = block( 2025-08-14T21:46:45.7350734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7351094Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7351458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7351830Z return func(*args, **kwargs) 2025-08-14T21:46:45.7352366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7352768Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7353185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7353593Z return func(*args, **kwargs) 2025-08-14T21:46:45.7353967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7354369Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7354833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7355357Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7355556Z 2025-08-14T21:46:45.7355680Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7356067Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7356422Z return mod(**inputs) 2025-08-14T21:46:45.7356816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7357279Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7357669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7358045Z outputs = block( 2025-08-14T21:46:45.7358372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7358746Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7359160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7359533Z return func(*args, **kwargs) 2025-08-14T21:46:45.7359919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7360310Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7360698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7361076Z return func(*args, **kwargs) 2025-08-14T21:46:45.7361441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7361849Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7362294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7362754Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7362918Z 2025-08-14T21:46:45.7363000Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7363359Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7363601Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7363972Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7364296Z return mod(**inputs) 2025-08-14T21:46:45.7364668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7365094Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7365504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7365950Z outputs = block( 2025-08-14T21:46:45.7366281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7366657Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7367144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7368455Z return func(*args, **kwargs) 2025-08-14T21:46:45.7368860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7369316Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7369757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7370175Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7370551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7371022Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7371277Z 2025-08-14T21:46:45.7371363Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7371599Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7371820Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7372048Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7372274Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7372498Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7372766Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7373151Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7373499Z return mod(**inputs) 2025-08-14T21:46:45.7373877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7374305Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7374718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7375116Z outputs = block( 2025-08-14T21:46:45.7375463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7375851Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7376252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7376640Z return func(*args, **kwargs) 2025-08-14T21:46:45.7377032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7377451Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7377862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7378248Z return func(*args, **kwargs) 2025-08-14T21:46:45.7378642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7379074Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7379660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7380232Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7380447Z 2025-08-14T21:46:45.7380564Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7380967Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7381319Z return mod(**inputs) 2025-08-14T21:46:45.7381719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7382143Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7382564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7383052Z outputs = block( 2025-08-14T21:46:45.7383409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7383823Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7384242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7384638Z return func(*args, **kwargs) 2025-08-14T21:46:45.7385011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7385416Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7385795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7386166Z return func(*args, **kwargs) 2025-08-14T21:46:45.7386545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7386943Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7387393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7387854Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7388016Z 2025-08-14T21:46:45.7388107Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7388315Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7388560Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7388928Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7389261Z return mod(**inputs) 2025-08-14T21:46:45.7389618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7390018Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7390420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7390792Z outputs = block( 2025-08-14T21:46:45.7391118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7391487Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7391873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7392244Z return func(*args, **kwargs) 2025-08-14T21:46:45.7392618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7393040Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7393453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7393839Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7394197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7394656Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7394891Z 2025-08-14T21:46:45.7394973Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7395192Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7395406Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7395617Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7395814Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7396022Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7396261Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7396622Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7397041Z return mod(**inputs) 2025-08-14T21:46:45.7397413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7397884Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7398290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7398668Z outputs = block( 2025-08-14T21:46:45.7398994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7399351Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7399744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7400115Z return func(*args, **kwargs) 2025-08-14T21:46:45.7400481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7400872Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7401267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7401649Z return func(*args, **kwargs) 2025-08-14T21:46:45.7402016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7402426Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7402874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7403342Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7403521Z 2025-08-14T21:46:45.7403624Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7403982Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7404303Z return mod(**inputs) 2025-08-14T21:46:45.7404648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7405036Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7405414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7405774Z outputs = block( 2025-08-14T21:46:45.7406081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7406443Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7406821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7407194Z return func(*args, **kwargs) 2025-08-14T21:46:45.7407561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7407964Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7408342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7408699Z return func(*args, **kwargs) 2025-08-14T21:46:45.7409071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7409477Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7409927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7410365Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7410531Z 2025-08-14T21:46:45.7410609Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7410819Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7411097Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7411454Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7411794Z return mod(**inputs) 2025-08-14T21:46:45.7412225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7412617Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7412998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7413363Z outputs = block( 2025-08-14T21:46:45.7413671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7414031Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7414410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7414781Z return func(*args, **kwargs) 2025-08-14T21:46:45.7415145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7415556Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7415955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7416335Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7416671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7417108Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7417339Z 2025-08-14T21:46:45.7417427Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7417635Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7417850Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7418056Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7418263Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7418463Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7418701Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7419067Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7419389Z return mod(**inputs) 2025-08-14T21:46:45.7419896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7420338Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7420772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7421172Z outputs = block( 2025-08-14T21:46:45.7421531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7421926Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7422299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7422681Z return func(*args, **kwargs) 2025-08-14T21:46:45.7423059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7423463Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7423844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7424221Z return func(*args, **kwargs) 2025-08-14T21:46:45.7424589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7424991Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7425503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7426017Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7426228Z 2025-08-14T21:46:45.7426343Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7426704Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7427035Z return mod(**inputs) 2025-08-14T21:46:45.7427408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7427813Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7428204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7428585Z outputs = block( 2025-08-14T21:46:45.7428913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7429276Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7429660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7430038Z return func(*args, **kwargs) 2025-08-14T21:46:45.7430413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7430805Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7431195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7431569Z return func(*args, **kwargs) 2025-08-14T21:46:45.7431934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7432347Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7432796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7433261Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7433426Z 2025-08-14T21:46:45.7433508Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7433719Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7433957Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7434374Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7434818Z return mod(**inputs) 2025-08-14T21:46:45.7435236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7435626Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7436014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7436379Z outputs = block( 2025-08-14T21:46:45.7436699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7437054Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7437438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7437813Z return func(*args, **kwargs) 2025-08-14T21:46:45.7438188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7438592Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7439006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7439391Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7439818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7440279Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7440559Z 2025-08-14T21:46:45.7440644Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7440872Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7441090Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7441300Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7441508Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7441715Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7442097Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7442472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7442801Z return mod(**inputs) 2025-08-14T21:46:45.7443185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7443619Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7444017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7444414Z outputs = block( 2025-08-14T21:46:45.7444758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7445126Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7445510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7445880Z return func(*args, **kwargs) 2025-08-14T21:46:45.7446254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7446654Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7447041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7447407Z return func(*args, **kwargs) 2025-08-14T21:46:45.7447778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7448183Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7448621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7449100Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7449292Z 2025-08-14T21:46:45.7449398Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7449760Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7450083Z return mod(**inputs) 2025-08-14T21:46:45.7450457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7450856Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7451253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7451620Z outputs = block( 2025-08-14T21:46:45.7451950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7452318Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7452697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7453079Z return func(*args, **kwargs) 2025-08-14T21:46:45.7453450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7453985Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7454386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7454809Z return func(*args, **kwargs) 2025-08-14T21:46:45.7455209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7455619Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7456068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7456527Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7456718Z 2025-08-14T21:46:45.7456803Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7457020Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7457258Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7457619Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7457963Z return mod(**inputs) 2025-08-14T21:46:45.7458327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7458721Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7459101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7459472Z outputs = block( 2025-08-14T21:46:45.7459903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7460295Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7460698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7461073Z return func(*args, **kwargs) 2025-08-14T21:46:45.7461454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7461868Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7462289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7462691Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7463052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7463505Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7463750Z 2025-08-14T21:46:45.7463830Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7464047Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7464249Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7464459Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7464666Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7464869Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7465124Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7465489Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7465821Z return mod(**inputs) 2025-08-14T21:46:45.7466192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7466586Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7466981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7467356Z outputs = block( 2025-08-14T21:46:45.7467679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7468101Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7468489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7468879Z return func(*args, **kwargs) 2025-08-14T21:46:45.7469262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7469649Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7470024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7470390Z return func(*args, **kwargs) 2025-08-14T21:46:45.7470743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7471143Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7471581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7472047Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7472224Z 2025-08-14T21:46:45.7472327Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7472681Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7473000Z return mod(**inputs) 2025-08-14T21:46:45.7473346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7473729Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7474106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7474459Z outputs = block( 2025-08-14T21:46:45.7474765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7475120Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7475494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7475856Z return func(*args, **kwargs) 2025-08-14T21:46:45.7476210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7476599Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7476976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7477333Z return func(*args, **kwargs) 2025-08-14T21:46:45.7477696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7478091Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7478527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7478964Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7479132Z 2025-08-14T21:46:45.7479211Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7479420Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7479646Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7479996Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7480312Z return mod(**inputs) 2025-08-14T21:46:45.7480668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7481046Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7481476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7481843Z outputs = block( 2025-08-14T21:46:45.7482152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7482570Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7482969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7483329Z return func(*args, **kwargs) 2025-08-14T21:46:45.7483678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7484076Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7484473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7484855Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7485191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7485633Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7485857Z 2025-08-14T21:46:45.7485946Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7486147Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7486350Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7486551Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7486751Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7486953Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7487175Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7487524Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7487831Z return mod(**inputs) 2025-08-14T21:46:45.7488187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7488573Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7488946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7489309Z outputs = block( 2025-08-14T21:46:45.7489618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7489967Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7490325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7490685Z return func(*args, **kwargs) 2025-08-14T21:46:45.7491042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7491429Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7491802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7492161Z return func(*args, **kwargs) 2025-08-14T21:46:45.7492519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7492905Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7493334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:46:45.7493803Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:46:45.7493979Z 2025-08-14T21:46:45.7494089Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7494438Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7494761Z return mod(**inputs) 2025-08-14T21:46:45.7495160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7495566Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7495938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7496326Z outputs = block( 2025-08-14T21:46:45.7496642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7496990Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7497362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7497725Z return func(*args, **kwargs) 2025-08-14T21:46:45.7498100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:46:45.7498484Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:46:45.7498860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7499224Z return func(*args, **kwargs) 2025-08-14T21:46:45.7499720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:46:45.7500168Z attn_output, attn_weights = attention_interface( 2025-08-14T21:46:45.7500643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:46:45.7501102Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:46:45.7501276Z 2025-08-14T21:46:45.7501355Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7501566Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7501797Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:46:45.7502138Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:46:45.7502474Z return mod(**inputs) 2025-08-14T21:46:45.7502861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:46:45.7503279Z transformer_outputs = self.transformer( 2025-08-14T21:46:45.7503688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:46:45.7504081Z outputs = block( 2025-08-14T21:46:45.7504419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:46:45.7504793Z return super().__call__(*args, **kwargs) 2025-08-14T21:46:45.7505192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:46:45.7505587Z return func(*args, **kwargs) 2025-08-14T21:46:45.7505983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:46:45.7506409Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:46:45.7506838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:46:45.7507253Z hidden_states = self.act(hidden_states) 2025-08-14T21:46:45.7507627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:46:45.7508096Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:46:45.7508350Z 2025-08-14T21:46:45.7508437Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7508664Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7508858Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7509055Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7509299Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7509498Z cudagraph partition due to non gpu ops 2025-08-14T21:46:45.7509754Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:46:45.7510117Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7510450Z return mod(**inputs)
2025-08-14T21:46:45.7510788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:46:45.7511169Z transformer_outputs = self.transformer(
2025-08-14T21:46:45.7511546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:46:45.7511902Z outputs = block(
2025-08-14T21:46:45.7512205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:46:45.7512551Z return super().__call__(*args, **kwargs)
2025-08-14T21:46:45.7512915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7513267Z return func(*args, **kwargs)
2025-08-14T21:46:45.7513620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-08-14T21:46:45.7514006Z attn_output, self_attn_weights = self.attn(
2025-08-14T21:46:45.7514372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7514715Z return func(*args, **kwargs)
2025-08-14T21:46:45.7515065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-08-14T21:46:45.7515451Z attn_output, attn_weights = attention_interface(
2025-08-14T21:46:45.7515869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:46:45.7516333Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:46:45.7516516Z
2025-08-14T21:46:45.7516617Z cudagraph partition due to non gpu ops.
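This trace and the previous attention trace both end inside transformers' SDPA integration: line 81 calls torch.nn.functional.scaled_dot_product_attention on (batch, heads, seq, head_dim) tensors, and line 91 transposes the result back to (batch, seq, heads, head_dim) and makes it contiguous. A minimal sketch of that pattern; the shapes and the causal flag are assumed GPT-2-like values:

# Sketch of the sdpa_attention_forward pattern in the traces: call SDPA on
# (batch, heads, seq, head_dim) tensors, then transpose back and make contiguous.
import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 12, 16, 64  # assumed GPT-2-like shapes
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
v = torch.randn(batch, heads, seq, head_dim)

attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # ~ line 81
attn_output = attn_output.transpose(1, 2).contiguous()                 # ~ line 91
print(attn_output.shape)  # (batch, seq, heads, head_dim)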
Found from :
2025-08-14T21:46:45.7571162Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7571492Z return mod(**inputs)
2025-08-14T21:46:45.7571859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:46:45.7572249Z transformer_outputs = self.transformer(
2025-08-14T21:46:45.7572638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:46:45.7573011Z outputs = block(
2025-08-14T21:46:45.7573336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:46:45.7573697Z return super().__call__(*args, **kwargs)
2025-08-14T21:46:45.7574073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:46:45.7574451Z return func(*args, **kwargs)
2025-08-14T21:46:45.7574842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-08-14T21:46:45.7575272Z feed_forward_hidden_states = self.mlp(hidden_states)
2025-08-14T21:46:45.7575681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward
2025-08-14T21:46:45.7576072Z hidden_states = self.act(hidden_states)
2025-08-14T21:46:45.7576418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:46:45.7576879Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:46:45.7577129Z
2025-08-14T21:46:45.7577214Z cudagraph partition due to non gpu ops
2025-08-14T21:46:45.7577470Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:45.7577854Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7578201Z return mod(**inputs)
2025-08-14T21:46:45.7578582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1494, in forward
2025-08-14T21:46:45.7578990Z logits = self.score(hidden_states)
2025-08-14T21:46:45.7579135Z
2025-08-14T21:46:45.7579246Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:46:45.7579779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7580170Z return mod(**inputs)
2025-08-14T21:46:45.7580601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:46:45.7581113Z loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:46:45.7581333Z
2025-08-14T21:46:45.7581467Z cudagraph partition due to non gpu ops.
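The last two traces above reach the GPT2ForSequenceClassification head: logits = self.score(hidden_states) followed by a cross-entropy loss over the pooled logits at line 1537. A small standalone sketch of that head and loss call; the hidden size, label count, and already-pooled hidden states are assumptions:

# Sketch of the classification head and loss seen at modeling_gpt2.py:1494/1537:
# logits = self.score(hidden_states); loss = loss_fct(pooled_logits.view(-1, num_labels), labels.view(-1))
import torch
import torch.nn as nn

batch, hidden, num_labels = 2, 768, 2              # assumed sizes
score = nn.Linear(hidden, num_labels, bias=False)  # ~ self.score
hidden_states = torch.randn(batch, hidden)         # assume one pooled vector per sequence
labels = torch.tensor([0, 1])

pooled_logits = score(hidden_states)
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(pooled_logits.view(-1, num_labels), labels.view(-1))
print(loss)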
Found from :
2025-08-14T21:46:45.7581852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:46:45.7582189Z return mod(**inputs)
2025-08-14T21:46:45.7582576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:46:45.7583056Z loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:46:45.7583261Z
2025-08-14T21:47:02.0052476Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0058562Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0058918Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0059195Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0059425Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0059833Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0060076Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0060324Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0060550Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0060812Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0061051Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0061286Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0061556Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:02.0061999Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:02.0062373Z return mod(**inputs)
2025-08-14T21:47:02.0062807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1509, in forward
2025-08-14T21:47:02.0063344Z last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
2025-08-14T21:47:02.0063542Z
2025-08-14T21:47:02.0063632Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0063871Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0064097Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0064325Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0064546Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0064810Z cudagraph partition due to non gpu ops.
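The frame at modeling_gpt2.py line 1509 computes, per sequence, the index of the last non-padding token so the classification head can pool the hidden state at that position. A standalone sketch of that indexing trick; the pad id and the example batch are assumptions:

# Sketch of the pooling-index computation seen at modeling_gpt2.py:1509:
# last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
import torch

pad_token_id = 0                                # assumed pad id
input_ids = torch.tensor([[5, 8, 3, 0, 0],      # 3 real tokens, 2 pads
                          [7, 2, 9, 4, 6]])     # no padding

non_pad_mask = (input_ids != pad_token_id).to(torch.long)
token_indices = torch.arange(input_ids.shape[-1])
last_non_pad_token = (token_indices * non_pad_mask).argmax(-1)
print(last_non_pad_token)  # tensor([2, 4]) -> index of last real token per row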
Found from : 2025-08-14T21:47:02.0065217Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0065571Z return mod(**inputs) 2025-08-14T21:47:02.0065972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0066421Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0066863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0067262Z outputs = block( 2025-08-14T21:47:02.0067625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0068022Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0068445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0068869Z return func(*args, **kwargs) 2025-08-14T21:47:02.0069270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0069717Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0070136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0070557Z return func(*args, **kwargs) 2025-08-14T21:47:02.0071399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0071920Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0072415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:02.0073018Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:02.0073218Z 2025-08-14T21:47:02.0073337Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0073723Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0074069Z return mod(**inputs) 2025-08-14T21:47:02.0074457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0074881Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0075293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0075673Z outputs = block( 2025-08-14T21:47:02.0076018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0076454Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0076840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0077224Z return func(*args, **kwargs) 2025-08-14T21:47:02.0077615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0078015Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0078410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0078787Z return func(*args, **kwargs) 2025-08-14T21:47:02.0079154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0079577Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0080027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:02.0080490Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:02.0080662Z 2025-08-14T21:47:02.0080744Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0080965Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0081338Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0081713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0082035Z return mod(**inputs) 2025-08-14T21:47:02.0082400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0082799Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0083209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0083629Z outputs = block( 2025-08-14T21:47:02.0083976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0084358Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0084767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0085142Z return func(*args, **kwargs) 2025-08-14T21:47:02.0085517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:02.0085930Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:02.0086385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:02.0086805Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:02.0087158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:02.0087636Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:02.0087878Z 2025-08-14T21:47:02.0087960Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0088174Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0088391Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0088593Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0088801Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0089008Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0089238Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0089608Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0089944Z return mod(**inputs) 2025-08-14T21:47:02.0090305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0090709Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0091103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0091485Z outputs = block( 2025-08-14T21:47:02.0091803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0092168Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0092554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0092927Z return func(*args, **kwargs) 2025-08-14T21:47:02.0093309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0093724Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0094118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0094490Z return func(*args, **kwargs) 2025-08-14T21:47:02.0094865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0095278Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0095726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:02.0096213Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:02.0096404Z 2025-08-14T21:47:02.0096512Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0096879Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0097205Z return mod(**inputs) 2025-08-14T21:47:02.0097571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0097977Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0098390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0098787Z outputs = block( 2025-08-14T21:47:02.0099128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0099666Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0100079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0100553Z return func(*args, **kwargs) 2025-08-14T21:47:02.0100979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0101526Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0101953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0102366Z return func(*args, **kwargs) 2025-08-14T21:47:02.0102768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0103206Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0103671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:02.0104168Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:02.0104343Z 2025-08-14T21:47:02.0104440Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0104657Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0104912Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0105304Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0105656Z return mod(**inputs) 2025-08-14T21:47:02.0106032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0106451Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0106868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0107274Z outputs = block( 2025-08-14T21:47:02.0107617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0108005Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0108406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0108766Z return func(*args, **kwargs) 2025-08-14T21:47:02.0109129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:02.0109544Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:02.0109939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:02.0110326Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:02.0110677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:02.0111124Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:02.0111356Z 2025-08-14T21:47:02.0111438Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0111652Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0111862Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0112065Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0112264Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0112466Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0112697Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0113048Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0113372Z return mod(**inputs) 2025-08-14T21:47:02.0113733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0114120Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0114510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0115773Z outputs = block( 2025-08-14T21:47:02.0116112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0116515Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0116913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0117281Z return func(*args, **kwargs) 2025-08-14T21:47:02.0117634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0118037Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0118434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0118796Z return func(*args, **kwargs) 2025-08-14T21:47:02.0119150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0119551Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0119988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:02.0120471Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:02.0120653Z 2025-08-14T21:47:02.0120758Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0121114Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0121442Z return mod(**inputs) 2025-08-14T21:47:02.0121787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0122198Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0122676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0123043Z outputs = block( 2025-08-14T21:47:02.0123358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0123713Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0124083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0124447Z return func(*args, **kwargs) 2025-08-14T21:47:02.0124805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0125197Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0125576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0125932Z return func(*args, **kwargs) 2025-08-14T21:47:02.0126299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0126699Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0127131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:02.0127574Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:02.0127742Z 2025-08-14T21:47:02.0127823Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0128040Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0128280Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0128652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0128987Z return mod(**inputs) 2025-08-14T21:47:02.0129358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0129801Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0130197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0130607Z outputs = block( 2025-08-14T21:47:02.0130948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0131342Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0131721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0132103Z return func(*args, **kwargs) 2025-08-14T21:47:02.0132470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:02.0132886Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:02.0133302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:02.0133713Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:02.0134083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:02.0134562Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:02.0134812Z 2025-08-14T21:47:02.0134907Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0135128Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0135352Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0135571Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0135792Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0136003Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0136257Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0136646Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0136993Z return mod(**inputs) 2025-08-14T21:47:02.0137380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0137805Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0138218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0138616Z outputs = block( 2025-08-14T21:47:02.0138961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0139351Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0139861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0140285Z return func(*args, **kwargs) 2025-08-14T21:47:02.0140707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0141153Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0141547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0142096Z return func(*args, **kwargs) 2025-08-14T21:47:02.0142478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0142880Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0143335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:02.0143855Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:02.0144051Z 2025-08-14T21:47:02.0144176Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0144662Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0145054Z return mod(**inputs) 2025-08-14T21:47:02.0145446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0145899Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0146316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0146716Z outputs = block( 2025-08-14T21:47:02.0147044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0147436Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0147836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0148239Z return func(*args, **kwargs) 2025-08-14T21:47:02.0148640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0149054Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0149469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0149864Z return func(*args, **kwargs) 2025-08-14T21:47:02.0150248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0150693Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0151163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:02.0151652Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:02.0151824Z 2025-08-14T21:47:02.0151911Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0152140Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0152391Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0152772Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0153124Z return mod(**inputs) 2025-08-14T21:47:02.0153506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0153926Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0154332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0154732Z outputs = block( 2025-08-14T21:47:02.0155079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0155458Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0155864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0156257Z return func(*args, **kwargs) 2025-08-14T21:47:02.0156646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:02.0157075Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:02.0157513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:02.0157937Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:02.0158311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:02.0158760Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:02.0159007Z 2025-08-14T21:47:02.0159087Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0159348Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0159558Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0159792Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0160004Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0160234Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0160465Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0160835Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0161172Z return mod(**inputs) 2025-08-14T21:47:02.0161528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0161935Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0162329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0162710Z outputs = block( 2025-08-14T21:47:02.0163030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0163395Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0163779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0164151Z return func(*args, **kwargs) 2025-08-14T21:47:02.0164528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0164933Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0165324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0165694Z return func(*args, **kwargs) 2025-08-14T21:47:02.0166066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0166479Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0166915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:02.0167400Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:02.0167593Z 2025-08-14T21:47:02.0167698Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0168064Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0168387Z return mod(**inputs) 2025-08-14T21:47:02.0168750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0169155Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0169549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0169931Z outputs = block( 2025-08-14T21:47:02.0170257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0170623Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0170998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0171368Z return func(*args, **kwargs) 2025-08-14T21:47:02.0171740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0172142Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0172519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0172891Z return func(*args, **kwargs) 2025-08-14T21:47:02.0173332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0173743Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0174205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:02.0174693Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:02.0174860Z 2025-08-14T21:47:02.0174952Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0175165Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0175415Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0175801Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0176154Z return mod(**inputs) 2025-08-14T21:47:02.0176562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0177044Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0177470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0177861Z outputs = block( 2025-08-14T21:47:02.0178206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0178592Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0178992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0179392Z return func(*args, **kwargs) 2025-08-14T21:47:02.0179857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-08-14T21:47:02.0180319Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-08-14T21:47:02.0180765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward 2025-08-14T21:47:02.0181188Z hidden_states = self.act(hidden_states) 2025-08-14T21:47:02.0181564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:02.0182056Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:02.0182308Z 2025-08-14T21:47:02.0182397Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0182625Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0182859Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0183076Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0183295Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0183516Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0183762Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0184147Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0184499Z return mod(**inputs) 2025-08-14T21:47:02.0184888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0185304Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0185726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0186125Z outputs = block( 2025-08-14T21:47:02.0186459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0186838Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0187209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0187580Z return func(*args, **kwargs) 2025-08-14T21:47:02.0187991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0188397Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0188803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0189186Z return func(*args, **kwargs) 2025-08-14T21:47:02.0189541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0189939Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0190370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:47:02.0190830Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:47:02.0191016Z 2025-08-14T21:47:02.0191118Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:02.0191472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:02.0191788Z return mod(**inputs) 2025-08-14T21:47:02.0192135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-08-14T21:47:02.0192523Z transformer_outputs = self.transformer( 2025-08-14T21:47:02.0192910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-08-14T21:47:02.0193284Z outputs = block( 2025-08-14T21:47:02.0193597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:02.0193964Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:02.0194333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0194684Z return func(*args, **kwargs) 2025-08-14T21:47:02.0195045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-08-14T21:47:02.0195430Z attn_output, self_attn_weights = self.attn( 2025-08-14T21:47:02.0195809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:02.0196161Z return func(*args, **kwargs) 2025-08-14T21:47:02.0196518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-08-14T21:47:02.0196914Z attn_output, attn_weights = attention_interface( 2025-08-14T21:47:02.0197349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:47:02.0197803Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:47:02.0197969Z 2025-08-14T21:47:02.0198049Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0198259Z cudagraph partition due to non gpu ops 2025-08-14T21:47:02.0198487Z cudagraph partition due to non gpu ops. 
2025-08-14T21:47:02.0198840Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:02.0199163Z     return mod(**inputs)
2025-08-14T21:47:02.0199510Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:47:02.0199905Z     transformer_outputs = self.transformer(
2025-08-14T21:47:02.0200286Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:47:02.0200649Z     outputs = block(
2025-08-14T21:47:02.0200967Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:02.0201320Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:02.0201730Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:02.0202099Z     return func(*args, **kwargs)
2025-08-14T21:47:02.0202474Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-08-14T21:47:02.0202897Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-08-14T21:47:02.0203293Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward
2025-08-14T21:47:02.0203666Z     hidden_states = self.act(hidden_states)
2025-08-14T21:47:02.0204011Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:47:02.0204451Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:47:02.0204674Z 
2025-08-14T21:47:02.0204760Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0204961Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0205170Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0205375Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0205570Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0205772Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0206010Z cudagraph partition due to non gpu ops.
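The activations.py frame above is the tanh approximation of GELU written out by hand. As a hedged aside, the same expression is what torch.nn.functional.gelu computes when given approximate="tanh"; a minimal standalone sketch follows (the tensor shape is an illustrative assumption, not taken from this run):

import math
import torch

# Tanh-based GELU approximation, copied from the traceback's activations.py frame.
def gelu_tanh(x: torch.Tensor) -> torch.Tensor:
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

# Hypothetical activation shape; the benchmark's real shapes are not visible in this log.
x = torch.randn(2, 1024, 3072)

# PyTorch exposes the same approximation via the `approximate` flag.
ref = torch.nn.functional.gelu(x, approximate="tanh")
print((gelu_tanh(x) - ref).abs().max())  # expected to be on the order of float32 rounding error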
Found from :
2025-08-14T21:47:02.0336787Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:02.0337115Z     return mod(**inputs)
2025-08-14T21:47:02.0337476Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-08-14T21:47:02.0337885Z     transformer_outputs = self.transformer(
2025-08-14T21:47:02.0338277Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-08-14T21:47:02.0338647Z     outputs = block(
2025-08-14T21:47:02.0338967Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:02.0339322Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:02.0339790Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:02.0340197Z     return func(*args, **kwargs)
2025-08-14T21:47:02.0340593Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-08-14T21:47:02.0341037Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-08-14T21:47:02.0341450Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 366, in forward
2025-08-14T21:47:02.0342030Z     hidden_states = self.act(hidden_states)
2025-08-14T21:47:02.0342383Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:47:02.0342840Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:47:02.0343080Z 
2025-08-14T21:47:02.0343161Z cudagraph partition due to non gpu ops
2025-08-14T21:47:02.0343407Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:02.0343766Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:02.0344096Z     return mod(**inputs)
2025-08-14T21:47:02.0344551Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1494, in forward
2025-08-14T21:47:02.0344970Z     logits = self.score(hidden_states)
2025-08-14T21:47:02.0345108Z 
2025-08-14T21:47:02.0345216Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:02.0345613Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:02.0345939Z     return mod(**inputs)
2025-08-14T21:47:02.0346294Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:47:02.0346756Z     loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:47:02.0346964Z 
2025-08-14T21:47:02.0347070Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:02.0347431Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:02.0347753Z     return mod(**inputs)
2025-08-14T21:47:02.0348114Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward
2025-08-14T21:47:02.0348567Z     loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:47:02.0348767Z 
2025-08-14T21:47:04.2484071Z Compilation time (from dynamo_timed): 25.258608399
2025-08-14T21:47:04.2488057Z pass
2025-08-14T21:47:04.2488363Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:04.2489164Z TIMING: _recursive_pre_grad_passes:0.09232 _recursive_joint_graph_passes:0.79622 _recursive_post_grad_passes:0.15138 async_compile.wait:0.80764 code_gen:11.37607 inductor_compile:14.68001 backend_compile:21.23448 gc:0.00085 entire_frame_compile:25.25861 total_wall_time:25.25861
2025-08-14T21:47:04.2490255Z STATS: call_* op count: 1138 | FakeTensorMode.__torch_dispatch__:42150 | FakeTensor.__torch_dispatch__:7924 | ProxyTorchDispatchMode.__torch_dispatch__:8335
2025-08-14T21:47:04.2490757Z Dynamo produced 2 graphs covering 1138 ops with 0 graph breaks (0 unique)
2025-08-14T21:47:10.3082074Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:47:10.3083134Z   from pkg_resources import resource_filename
2025-08-14T21:47:10.8966743Z 
2025-08-14T21:47:12.0926109Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:47:12.0926416Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:47:12.0934290Z cpu eval GoogleFnet
2025-08-14T21:47:12.5236813Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:12.6904082Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:12.8555783Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
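For this model the partition messages point at two frames inside transformers' sdpa_attention_forward: the scaled_dot_product_attention call and the transpose(1, 2).contiguous() applied to its output. Below is a minimal sketch of that call pattern under torch.compile; the shapes, the is_causal flag, and the bare torch.compile wrapper are illustrative assumptions, not the benchmark's actual configuration.

import torch
import torch.nn.functional as F

def sdpa_block(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # The two frames named in the log: SDPA, then transpose(1, 2).contiguous() on its output.
    attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return attn_output.transpose(1, 2).contiguous()

# Hypothetical (batch, heads, seq_len, head_dim) shapes; not taken from this run.
q = k = v = torch.randn(1, 12, 128, 64)

compiled = torch.compile(sdpa_block)  # default inductor backend; runs on CPU as well
out = compiled(q, k, v)
print(out.shape)  # torch.Size([1, 128, 12, 64]) after the transpose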
2025-08-14T21:47:19.3899129Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:19.3900195Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:19.3900614Z     return mod(**inputs)
2025-08-14T21:47:19.3901066Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:19.3901509Z     outputs = self.fnet(
2025-08-14T21:47:19.3901925Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-08-14T21:47:19.3902361Z     encoder_outputs = self.encoder(
2025-08-14T21:47:19.3902795Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-08-14T21:47:19.3903652Z     layer_outputs = layer_module(hidden_states)
2025-08-14T21:47:19.3904115Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:19.3904597Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:19.3905106Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward
2025-08-14T21:47:19.3905568Z     self_fourier_outputs = self.fourier(hidden_states)
2025-08-14T21:47:19.3906025Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward
2025-08-14T21:47:19.3906514Z     self_outputs = self.self(hidden_states)
2025-08-14T21:47:19.3906955Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward
2025-08-14T21:47:19.3907430Z     outputs = self.fourier_transform(hidden_states).real
2025-08-14T21:47:19.3907613Z 
2025-08-14T21:47:19.3907745Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:47:19.3908162Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3908540Z return mod(**inputs) 2025-08-14T21:47:19.3908944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3909360Z outputs = self.fnet( 2025-08-14T21:47:19.3909752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.3910194Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.3910624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.3911072Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.3911482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.3911901Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.3912335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.3912799Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.3913263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.3913712Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.3914172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.3914627Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.3914816Z 2025-08-14T21:47:19.3914941Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.3915338Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3915730Z return mod(**inputs) 2025-08-14T21:47:19.3916132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3916559Z outputs = self.fnet( 2025-08-14T21:47:19.3916968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.3917416Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.3917835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.3918272Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.3918684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.3919121Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.3919591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.3920072Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.3920570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.3921930Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.3922363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.3922820Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.3922998Z 2025-08-14T21:47:19.3923115Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.3923503Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3923846Z return mod(**inputs) 2025-08-14T21:47:19.3924254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3924669Z outputs = self.fnet( 2025-08-14T21:47:19.3925100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.3925531Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.3926179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.3926626Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.3927036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.3927442Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.3927879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.3928330Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.3928776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.3929211Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.3929640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.3930135Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.3930311Z 2025-08-14T21:47:19.3930433Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.3930836Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3931190Z return mod(**inputs) 2025-08-14T21:47:19.3931577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3931975Z outputs = self.fnet( 2025-08-14T21:47:19.3932353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.3932764Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.3933162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.3933598Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.3933988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.3934381Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.3934776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.3935203Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.3935634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.3936051Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.3936503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.3936993Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.3937186Z 2025-08-14T21:47:19.3937305Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.3937695Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3938051Z return mod(**inputs) 2025-08-14T21:47:19.3938438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3938855Z outputs = self.fnet( 2025-08-14T21:47:19.3939236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.3939774Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.3940233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.3940692Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.3941096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.3941495Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.3942303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.3942760Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.3943191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.3943618Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.3944037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.3944487Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.3944657Z 2025-08-14T21:47:19.3944774Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.3945165Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3945521Z return mod(**inputs) 2025-08-14T21:47:19.3945893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3946304Z outputs = self.fnet( 2025-08-14T21:47:19.3946694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.3947107Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.3947510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.3947935Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.3948341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.3948731Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.3949129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.3949541Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.3949947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.3950331Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.3950717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.3951133Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.3951290Z 2025-08-14T21:47:19.3951402Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.3978467Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3978820Z return mod(**inputs) 2025-08-14T21:47:19.3979203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3979691Z outputs = self.fnet( 2025-08-14T21:47:19.3980063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.3980479Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.3980928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.3981358Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.3981764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.3982176Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.3982588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.3983022Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.3983462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.3983848Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.3984236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.3984635Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.3984802Z 2025-08-14T21:47:19.3984890Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.3985141Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:47:19.3985496Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.3985828Z return mod(**inputs) 2025-08-14T21:47:19.3986194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.3986601Z outputs = self.fnet( 2025-08-14T21:47:19.3986942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 512, in forward 2025-08-14T21:47:19.3987327Z embedding_output = self.embeddings( 2025-08-14T21:47:19.3987711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 142, in forward 2025-08-14T21:47:19.3988097Z embeddings = self.projection(embeddings) 2025-08-14T21:47:19.3988247Z 2025-08-14T21:47:19.3988329Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.3988572Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:47:19.4015940Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:19.4016289Z     return mod(**inputs)
2025-08-14T21:47:19.4016664Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:19.4017072Z     outputs = self.fnet(
2025-08-14T21:47:19.4017461Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-08-14T21:47:19.4017879Z     encoder_outputs = self.encoder(
2025-08-14T21:47:19.4018295Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-08-14T21:47:19.4018724Z     layer_outputs = layer_module(hidden_states)
2025-08-14T21:47:19.4019127Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:19.4019629Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:19.4020068Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward
2025-08-14T21:47:19.4020506Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:47:19.4020963Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:47:19.4021397Z     return forward_fn(*input_tensors)
2025-08-14T21:47:19.4021839Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk
2025-08-14T21:47:19.4022310Z     intermediate_output = self.intermediate(fourier_output)
2025-08-14T21:47:19.4022737Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward
2025-08-14T21:47:19.4023165Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:47:19.4023548Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:47:19.4024009Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:47:19.4024250Z 
2025-08-14T21:47:19.4024334Z cudagraph partition due to non gpu ops
2025-08-14T21:47:19.4024557Z cudagraph partition due to non gpu ops
2025-08-14T21:47:19.4024790Z cudagraph partition due to non gpu ops.
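The innermost frame of this trace is the tanh approximation of GELU from transformers/activations.py, whose expression is quoted verbatim in the line above. Below is a minimal, self-contained restatement of that formula, checked against torch.nn.functional.gelu with approximate="tanh"; the input shape and tolerance are illustrative assumptions, not values from the log.

import math
import torch
import torch.nn.functional as F

def new_gelu(x: torch.Tensor) -> torch.Tensor:
    # Tanh-approximation GELU, the exact expression shown at activations.py:47 in the trace.
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

x = torch.randn(4, 768)
# PyTorch ships the same approximation as F.gelu(..., approximate="tanh").
assert torch.allclose(new_gelu(x), F.gelu(x, approximate="tanh"), atol=1e-6)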
Found from : 2025-08-14T21:47:19.4178753Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4179082Z return mod(**inputs) 2025-08-14T21:47:19.4179436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4179898Z outputs = self.fnet( 2025-08-14T21:47:19.4180268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4180674Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4181073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4181484Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4181857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4182224Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4182692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4183135Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4183581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4184010Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4184423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4184866Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4185043Z 2025-08-14T21:47:19.4185159Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4185547Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4185887Z return mod(**inputs) 2025-08-14T21:47:19.4186272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4186675Z outputs = self.fnet( 2025-08-14T21:47:19.4187054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4187459Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4187863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4188297Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4188687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4189076Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4189485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4189902Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4190293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4190682Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4191065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4191470Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4191623Z 2025-08-14T21:47:19.4191706Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4191948Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4192311Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4192633Z return mod(**inputs) 2025-08-14T21:47:19.4192987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4193367Z outputs = self.fnet( 2025-08-14T21:47:19.4193721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4194106Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4194496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4194897Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4195264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4195622Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4196004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:19.4196394Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:19.4196832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:19.4197240Z return forward_fn(*input_tensors) 2025-08-14T21:47:19.4197644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:19.4198109Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:19.4198515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:19.4198930Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:19.4199311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:19.4199765Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:19.4200008Z 2025-08-14T21:47:19.4200094Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4200326Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4200571Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4200928Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4201262Z return mod(**inputs) 2025-08-14T21:47:19.4201645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4202047Z outputs = self.fnet( 2025-08-14T21:47:19.4202439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4202851Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4203284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4203679Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4204049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4204413Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4204798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4205203Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4205645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4206042Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4206426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4206847Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4207013Z 2025-08-14T21:47:19.4207120Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4207493Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4207818Z return mod(**inputs) 2025-08-14T21:47:19.4208179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4208565Z outputs = self.fnet( 2025-08-14T21:47:19.4208913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4209304Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4209688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4210087Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4210453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4210821Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4211269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4211727Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4212174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4212567Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4212952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4213414Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4213591Z 2025-08-14T21:47:19.4213703Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4214087Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4214447Z return mod(**inputs) 2025-08-14T21:47:19.4214829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4215240Z outputs = self.fnet( 2025-08-14T21:47:19.4215624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4216044Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4216441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4216867Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4217279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4217660Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4218073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4218523Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4218967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4219399Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4219951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4220410Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4220583Z 2025-08-14T21:47:19.4220700Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4221108Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4221456Z return mod(**inputs) 2025-08-14T21:47:19.4221835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4222238Z outputs = self.fnet( 2025-08-14T21:47:19.4222618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4223030Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4223421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4223842Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4224234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4224618Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4225015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4225459Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4225937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4226361Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4226803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4227284Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4227457Z 2025-08-14T21:47:19.4227559Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4227819Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4228224Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4228587Z return mod(**inputs) 2025-08-14T21:47:19.4228981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4229399Z outputs = self.fnet( 2025-08-14T21:47:19.4229797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4230245Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4230657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4231094Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4231498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4231899Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4232313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:19.4232746Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:19.4233197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:19.4233643Z return forward_fn(*input_tensors) 2025-08-14T21:47:19.4234090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:19.4234595Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:19.4235066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:19.4235517Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:19.4235937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:19.4236438Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:19.4236696Z 2025-08-14T21:47:19.4236798Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4237026Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4237293Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4237700Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4238062Z return mod(**inputs) 2025-08-14T21:47:19.4238452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4238869Z outputs = self.fnet( 2025-08-14T21:47:19.4239263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4239703Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4240200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4240718Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4241385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4242177Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4242747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4243352Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4243870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4244383Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4244908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4245416Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4245652Z 2025-08-14T21:47:19.4245794Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4246294Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4246731Z return mod(**inputs) 2025-08-14T21:47:19.4247168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4247675Z outputs = self.fnet( 2025-08-14T21:47:19.4263887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4264343Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4264740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4265207Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4265666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4266035Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4266431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4266853Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4267262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4267692Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4268080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4268499Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4268661Z 2025-08-14T21:47:19.4268781Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4269144Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4269480Z return mod(**inputs) 2025-08-14T21:47:19.4269837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4270204Z outputs = self.fnet( 2025-08-14T21:47:19.4270560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4270943Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4271322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4271707Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4272073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4272436Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4272795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4273183Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4273704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4274088Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4274498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4274929Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4275080Z 2025-08-14T21:47:19.4275190Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4275541Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4275850Z return mod(**inputs) 2025-08-14T21:47:19.4276188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4276545Z outputs = self.fnet( 2025-08-14T21:47:19.4276875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4277245Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4277609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4277988Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4278335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4278685Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4279056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4279435Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4279825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4280202Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4280581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4280970Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4281131Z 2025-08-14T21:47:19.4281211Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4281452Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4281807Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4282118Z return mod(**inputs) 2025-08-14T21:47:19.4282463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4282822Z outputs = self.fnet( 2025-08-14T21:47:19.4283157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4283527Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4283893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4284279Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4284626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4284976Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4285350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:19.4285722Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:19.4286116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:19.4286502Z return forward_fn(*input_tensors) 2025-08-14T21:47:19.4286901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:19.4287378Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:19.4287791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:19.4288212Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:19.4288599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:19.4289035Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:19.4289263Z 2025-08-14T21:47:19.4289342Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4289552Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4289775Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4290128Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4290444Z return mod(**inputs) 2025-08-14T21:47:19.4290794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4291152Z outputs = self.fnet( 2025-08-14T21:47:19.4291495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4291863Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4292221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4292603Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4292955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4293303Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4293666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4294063Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4294457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4294834Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4295199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4295595Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4295746Z 2025-08-14T21:47:19.4295855Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4296192Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4296509Z return mod(**inputs) 2025-08-14T21:47:19.4296856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4297230Z outputs = self.fnet( 2025-08-14T21:47:19.4297566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4297940Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4298305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4298677Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4299032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4299377Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4299841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4300271Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4300765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4301196Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4301646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4302072Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4302231Z 2025-08-14T21:47:19.4302332Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4302712Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4303065Z return mod(**inputs) 2025-08-14T21:47:19.4303456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4303860Z outputs = self.fnet( 2025-08-14T21:47:19.4304239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4304648Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4305049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4305471Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4305855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4306242Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4306648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4307085Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4307502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4307875Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4308244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4308633Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4308780Z 2025-08-14T21:47:19.4308880Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4309228Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4309541Z return mod(**inputs) 2025-08-14T21:47:19.4309870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4310236Z outputs = self.fnet( 2025-08-14T21:47:19.4310573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4310935Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4311291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4311668Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4312021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4312357Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4312723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4313110Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4313493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4313856Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4314225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4314612Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4314794Z 2025-08-14T21:47:19.4314885Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4315132Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4315481Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4315814Z return mod(**inputs) 2025-08-14T21:47:19.4316148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4316515Z outputs = self.fnet( 2025-08-14T21:47:19.4316759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4316831Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4317063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4317156Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4317369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4317447Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4317687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:19.4317772Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:19.4318030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:19.4318108Z return forward_fn(*input_tensors) 2025-08-14T21:47:19.4318371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:19.4318494Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:19.4318731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:19.4318840Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:19.4319053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:19.4319228Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:19.4319232Z 2025-08-14T21:47:19.4319321Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4319398Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4319501Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4319713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4319777Z return mod(**inputs) 2025-08-14T21:47:19.4320023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4320092Z outputs = self.fnet( 2025-08-14T21:47:19.4320336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4320421Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4320654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4320746Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4320957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4321032Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4321273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4321369Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4321633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4321723Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4321974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4322100Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4322104Z 2025-08-14T21:47:19.4322207Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4322397Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4322468Z return mod(**inputs) 2025-08-14T21:47:19.4322701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4322772Z outputs = self.fnet( 2025-08-14T21:47:19.4323003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4323078Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4323317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4323401Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4323609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4323692Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4323922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4324021Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4324251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4324329Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4324583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4324681Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4324684Z 2025-08-14T21:47:19.4324790Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4324981Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4325044Z return mod(**inputs) 2025-08-14T21:47:19.4325281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4325344Z outputs = self.fnet( 2025-08-14T21:47:19.4325575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4325655Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4325886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4325975Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4326188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4326263Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4326500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4326592Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4326822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4326905Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4327134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4327239Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4327275Z 2025-08-14T21:47:19.4327376Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4327584Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4327673Z return mod(**inputs) 2025-08-14T21:47:19.4327906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4327980Z outputs = self.fnet( 2025-08-14T21:47:19.4328211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4328281Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4328517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4328598Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4328810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4328896Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4329128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4329229Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4329467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4329541Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4329775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4329869Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4329872Z 2025-08-14T21:47:19.4330055Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4330154Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4330355Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4330419Z return mod(**inputs) 2025-08-14T21:47:19.4330656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4330722Z outputs = self.fnet( 2025-08-14T21:47:19.4330952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4331028Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4331260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4331344Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4331563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4331641Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4331885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:19.4331970Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:19.4332218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:19.4332300Z return forward_fn(*input_tensors) 2025-08-14T21:47:19.4332568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:19.4332688Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:19.4332923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:19.4333031Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:19.4333291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:19.4333480Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:19.4333500Z 2025-08-14T21:47:19.4333579Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4333662Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4333763Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4333961Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4334023Z return mod(**inputs) 2025-08-14T21:47:19.4334261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4334333Z outputs = self.fnet( 2025-08-14T21:47:19.4334566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4334638Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4334883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4334967Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4335190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4335268Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4335501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4335602Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4335836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4335921Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4336154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4336254Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4336257Z 2025-08-14T21:47:19.4336366Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4336556Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4336619Z return mod(**inputs) 2025-08-14T21:47:19.4336862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4336925Z outputs = self.fnet( 2025-08-14T21:47:19.4337166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4337237Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4337474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4337564Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4337773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4337859Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4338093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4338184Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4338427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4338503Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4338736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4338877Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4338881Z 2025-08-14T21:47:19.4338981Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4339200Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4339318Z return mod(**inputs) 2025-08-14T21:47:19.4339672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4339760Z outputs = self.fnet( 2025-08-14T21:47:19.4340029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4340108Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4340384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4340475Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4340723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4340812Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4341083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4341196Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4341468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4341565Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4342007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4342124Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4342129Z 2025-08-14T21:47:19.4342257Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4342483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4342559Z return mod(**inputs) 2025-08-14T21:47:19.4342840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4342916Z outputs = self.fnet( 2025-08-14T21:47:19.4343201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4343286Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4343540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4343633Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4343851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4343937Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4344182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-08-14T21:47:19.4344278Z self_fourier_outputs = self.fourier(hidden_states) 2025-08-14T21:47:19.4344528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-08-14T21:47:19.4344611Z self_outputs = self.self(hidden_states) 2025-08-14T21:47:19.4344860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-08-14T21:47:19.4344966Z outputs = self.fourier_transform(hidden_states).real 2025-08-14T21:47:19.4344970Z 2025-08-14T21:47:19.4345046Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4345153Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:19.4345348Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:19.4345524Z return mod(**inputs) 2025-08-14T21:47:19.4345770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-08-14T21:47:19.4345862Z outputs = self.fnet( 2025-08-14T21:47:19.4346096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-08-14T21:47:19.4346201Z encoder_outputs = self.encoder( 2025-08-14T21:47:19.4346440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-08-14T21:47:19.4346530Z layer_outputs = layer_module(hidden_states) 2025-08-14T21:47:19.4346738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:19.4346813Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:19.4347056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-08-14T21:47:19.4347138Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:19.4347394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:19.4347472Z return forward_fn(*input_tensors) 2025-08-14T21:47:19.4347737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-08-14T21:47:19.4347856Z intermediate_output = self.intermediate(fourier_output) 2025-08-14T21:47:19.4348090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-08-14T21:47:19.4348196Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:19.4348406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-08-14T21:47:19.4348576Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-08-14T21:47:19.4348581Z 2025-08-14T21:47:19.4348664Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4348738Z cudagraph partition due to non gpu ops 2025-08-14T21:47:19.4348839Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:47:19.4349036Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:19.4349099Z     return mod(**inputs)
2025-08-14T21:47:19.4349341Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:19.4349405Z     outputs = self.fnet(
2025-08-14T21:47:19.4349635Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-08-14T21:47:19.4349715Z     encoder_outputs = self.encoder(
2025-08-14T21:47:19.4349949Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-08-14T21:47:19.4350031Z     layer_outputs = layer_module(hidden_states)
2025-08-14T21:47:19.4350250Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:19.4350326Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:19.4350564Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward
2025-08-14T21:47:19.4350659Z     self_fourier_outputs = self.fourier(hidden_states)
2025-08-14T21:47:19.4350891Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward
2025-08-14T21:47:19.4350977Z     self_outputs = self.self(hidden_states)
2025-08-14T21:47:19.4351209Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward
2025-08-14T21:47:19.4351341Z     outputs = self.fourier_transform(hidden_states).real
2025-08-14T21:47:19.4351354Z 
2025-08-14T21:47:19.4351454Z cudagraph partition due to non gpu ops
Found from :
2025-08-14T21:47:19.4359670Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:19.4359733Z     return mod(**inputs)
2025-08-14T21:47:19.4359976Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-08-14T21:47:19.4360043Z     outputs = self.fnet(
2025-08-14T21:47:19.4360282Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-08-14T21:47:19.4360353Z     encoder_outputs = self.encoder(
2025-08-14T21:47:19.4360591Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-08-14T21:47:19.4360681Z     layer_outputs = layer_module(hidden_states)
2025-08-14T21:47:19.4360891Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:19.4360974Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:19.4361210Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward
2025-08-14T21:47:19.4361291Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:47:19.4361553Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:47:19.4361629Z     return forward_fn(*input_tensors)
2025-08-14T21:47:19.4361897Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk
2025-08-14T21:47:19.4362017Z     intermediate_output = self.intermediate(fourier_output)
2025-08-14T21:47:19.4362255Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward
2025-08-14T21:47:19.4362372Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:47:19.4362576Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-08-14T21:47:19.4362746Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-08-14T21:47:19.4362750Z 
2025-08-14T21:47:19.4362834Z cudagraph partition due to non gpu ops
2025-08-14T21:47:19.4362944Z cudagraph partition due to non gpu ops
2025-08-14T21:47:19.4363027Z cudagraph partition due to non gpu ops
2025-08-14T21:47:19.4363147Z cudagraph partition due to non gpu ops
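Both FNet traces pass through apply_chunking_to_forward, which slices the feed-forward input along one dimension (typically the sequence dimension) and runs the MLP one chunk at a time to cap peak memory. A minimal re-implementation of that idea, independent of the transformers helper (the function, the toy MLP, and the chunk parameters are illustrative):

import torch
import torch.nn as nn

def chunked_forward(forward_fn, chunk_size: int, chunk_dim: int, x: torch.Tensor) -> torch.Tensor:
    """Run forward_fn over slices of x along chunk_dim and concatenate the results.

    Sketch of the chunking idea behind the traced helper; chunk_size == 0 means
    "no chunking" in this sketch.
    """
    if chunk_size == 0:
        return forward_fn(x)
    chunks = torch.split(x, chunk_size, dim=chunk_dim)
    return torch.cat([forward_fn(chunk) for chunk in chunks], dim=chunk_dim)

# Toy feed-forward block standing in for intermediate -> activation.
mlp = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))
x = torch.randn(2, 8, 16)                      # (batch, seq_len, hidden)
out = chunked_forward(mlp, chunk_size=4, chunk_dim=1, x=x)
print(torch.allclose(out, mlp(x), atol=1e-6))  # chunking must not change the result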
Found from :
2025-08-14T21:47:19.4363337Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:19.4363427Z     return mod(**inputs)
2025-08-14T21:47:19.4363669Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 686, in forward
2025-08-14T21:47:19.4363852Z     masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:47:19.4363855Z 
2025-08-14T21:47:28.1785776Z Compilation time (from dynamo_timed): 14.002107448
2025-08-14T21:47:28.1859753Z pass
2025-08-14T21:47:28.1860263Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:28.1861210Z TIMING: _recursive_pre_grad_passes:0.0257 _recursive_joint_graph_passes:0.22071 _recursive_post_grad_passes:0.07609 async_compile.wait:0.82195 code_gen:8.3792 inductor_compile:9.79121 backend_compile:12.23714 gc:0.00157 entire_frame_compile:14.00211 total_wall_time:14.00211
2025-08-14T21:47:28.1862242Z STATS: call_* op count: 232 | FakeTensorMode.__torch_dispatch__:14364 | FakeTensor.__torch_dispatch__:3342 | ProxyTorchDispatchMode.__torch_dispatch__:2923
2025-08-14T21:47:28.1862806Z Dynamo produced 1 graphs covering 232 ops with 0 graph breaks (0 unique)
2025-08-14T21:47:33.7811841Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:47:33.7813768Z   from pkg_resources import resource_filename
2025-08-14T21:47:34.3692244Z 
2025-08-14T21:47:35.8629504Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:47:35.8629835Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:47:35.8647129Z cpu eval LayoutLMForMaskedLM
2025-08-14T21:47:36.5427927Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:36.7876735Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:37.0245344Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:47.9141240Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9141609Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9142137Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9142369Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9142590Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9142815Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9143038Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9143294Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9143515Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9143744Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9144022Z cudagraph partition due to non gpu ops
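The repeated "cudagraph partition due to non gpu ops" lines come from Inductor's CUDA-graph handling: CUDA graphs can only capture GPU work, so regions containing ops that run off the GPU (and, in these cpu_inductor_* shards, the entire model) are partitioned out instead of being captured whole. A minimal, hedged sketch of the compile call that typically exercises this machinery; the model and inputs are placeholders, and on an all-CPU run like this one the cudagraph capture is simply skipped:

import torch
import torch.nn as nn

# Placeholder model standing in for the benchmarked HuggingFace module.
model = nn.Sequential(nn.Linear(16, 16), nn.GELU()).eval()

# mode="reduce-overhead" asks Inductor to use CUDA graphs where it can; regions
# with non-GPU ops (or a CPU-only run) fall outside the captured graph.
compiled = torch.compile(model, mode="reduce-overhead")

x = torch.randn(4, 16)  # CPU tensor here, matching the cpu_inductor_* configs
with torch.no_grad():
    out = compiled(x)
print(out.shape)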
Found from :
2025-08-14T21:47:47.9144440Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:47.9144811Z     return mod(**inputs)
2025-08-14T21:47:47.9145231Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9145655Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9146081Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9146477Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9146847Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:47:47.9147608Z     output = func(self, *args, **kwargs)
2025-08-14T21:47:47.9148075Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward
2025-08-14T21:47:47.9148684Z     outputs = self.layoutlm(
2025-08-14T21:47:47.9149073Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9149479Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9149878Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9150271Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9150631Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:47:47.9151017Z     output = func(self, *args, **kwargs)
2025-08-14T21:47:47.9151458Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:47:47.9151897Z     encoder_outputs = self.encoder(
2025-08-14T21:47:47.9152295Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9152697Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9153086Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9153476Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9153866Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9154263Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9154474Z   [Previous line repeated 1 more time]
2025-08-14T21:47:47.9154846Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:47:47.9155230Z     output = func(self, *args, **kwargs)
2025-08-14T21:47:47.9155668Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:47:47.9156102Z     layer_outputs = layer_module(
2025-08-14T21:47:47.9156480Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:47.9156868Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:47.9157279Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9157691Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9158082Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9158483Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9158864Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9159271Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9159710Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:47:47.9160167Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:47:47.9160595Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:47:47.9161016Z     return forward_fn(*input_tensors)
2025-08-14T21:47:47.9161474Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:47:47.9161991Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:47:47.9162464Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:47:47.9162991Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:47:47.9163411Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:47:47.9163796Z     return self.act(input)
2025-08-14T21:47:47.9163936Z 
2025-08-14T21:47:47.9164049Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9164277Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9164500Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9164712Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9164930Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9165260Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9165470Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9165690Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9165910Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9166118Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9166336Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9166607Z cudagraph partition due to non gpu ops
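Unlike the FNet trace, the LayoutLM trace ends in the transformers activation wrapper, whose forward is just `return self.act(input)`; in practice `self.act` is a GELU-style module picked from the model configuration's activation name. A small sketch of that dispatch pattern (the mapping table, class, and parameter names here are illustrative, not the transformers ACT2FN table):

import torch
import torch.nn as nn

# Illustrative name-to-module table, standing in for the config-driven lookup
# that puts a concrete activation behind `self.intermediate_act_fn`.
ACTIVATIONS = {
    "gelu": nn.GELU(),
    "gelu_new": nn.GELU(approximate="tanh"),  # the tanh approximation seen in the FNet trace
    "relu": nn.ReLU(),
}

class Intermediate(nn.Module):
    """Feed-forward 'intermediate' block: Linear followed by the configured activation."""

    def __init__(self, hidden_size: int, intermediate_size: int, hidden_act: str = "gelu"):
        super().__init__()
        self.dense = nn.Linear(hidden_size, intermediate_size)
        self.act = ACTIVATIONS[hidden_act]

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.act(self.dense(hidden_states))

block = Intermediate(hidden_size=16, intermediate_size=64, hidden_act="gelu_new")
print(block(torch.randn(2, 8, 16)).shape)  # torch.Size([2, 8, 64])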
Found from : 2025-08-14T21:47:47.9166980Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:47.9167350Z return mod(**inputs) 2025-08-14T21:47:47.9167730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9168135Z return func(*args, **kwargs) 2025-08-14T21:47:47.9168509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9168907Z return func(*args, **kwargs) 2025-08-14T21:47:47.9169269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9169660Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9170057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:47:47.9170461Z outputs = self.layoutlm( 2025-08-14T21:47:47.9170825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9171208Z return func(*args, **kwargs) 2025-08-14T21:47:47.9171600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9171966Z return func(*args, **kwargs) 2025-08-14T21:47:47.9172305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9172656Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9173083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:47:47.9173514Z encoder_outputs = self.encoder( 2025-08-14T21:47:47.9173901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9174294Z return func(*args, **kwargs) 2025-08-14T21:47:47.9174671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9175061Z return func(*args, **kwargs) 2025-08-14T21:47:47.9175431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9175821Z return func(*args, **kwargs) 2025-08-14T21:47:47.9176031Z [Previous line repeated 1 more time] 2025-08-14T21:47:47.9176398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9176770Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9177193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:47:47.9177687Z layer_outputs = layer_module( 2025-08-14T21:47:47.9178057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:47.9178461Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:47.9178880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9179267Z return func(*args, **kwargs) 2025-08-14T21:47:47.9179764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9180162Z 
return func(*args, **kwargs) 2025-08-14T21:47:47.9180544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9180932Z return func(*args, **kwargs) 2025-08-14T21:47:47.9181332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:47:47.9181754Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:47.9182176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:47.9182590Z return forward_fn(*input_tensors) 2025-08-14T21:47:47.9183046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:47:47.9183561Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:47:47.9184045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:47:47.9184532Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:47.9184948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:47:47.9185312Z return self.act(input) 2025-08-14T21:47:47.9185436Z 2025-08-14T21:47:47.9185524Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9185758Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9185982Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9186194Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9186416Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9186634Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9186851Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9187060Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9187278Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9187495Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9187704Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9187962Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:47.9188352Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:47.9188695Z return mod(**inputs) 2025-08-14T21:47:47.9189077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9189455Z return func(*args, **kwargs) 2025-08-14T21:47:47.9189806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9190160Z return func(*args, **kwargs) 2025-08-14T21:47:47.9190495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9190844Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9191232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:47:47.9191626Z outputs = self.layoutlm( 2025-08-14T21:47:47.9191974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9192375Z return func(*args, **kwargs) 2025-08-14T21:47:47.9192722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9193102Z return func(*args, **kwargs) 2025-08-14T21:47:47.9193459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9193810Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9194215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:47:47.9194624Z encoder_outputs = self.encoder( 2025-08-14T21:47:47.9194988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9195341Z return func(*args, **kwargs) 2025-08-14T21:47:47.9195694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9196056Z return func(*args, **kwargs) 2025-08-14T21:47:47.9196405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9196758Z return func(*args, **kwargs) 2025-08-14T21:47:47.9196953Z [Previous line repeated 1 more time] 2025-08-14T21:47:47.9197300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9197648Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9198048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:47:47.9198453Z layer_outputs = layer_module( 2025-08-14T21:47:47.9198804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:47.9199164Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:47.9199547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9199923Z return func(*args, **kwargs) 2025-08-14T21:47:47.9200281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9200657Z 
return func(*args, **kwargs) 2025-08-14T21:47:47.9201022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9201399Z return func(*args, **kwargs) 2025-08-14T21:47:47.9201793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:47:47.9202231Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:47.9202632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:47.9203017Z return forward_fn(*input_tensors) 2025-08-14T21:47:47.9203442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:47:47.9203912Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:47:47.9204353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:47:47.9204779Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:47.9205156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:47:47.9205493Z return self.act(input) 2025-08-14T21:47:47.9205604Z 2025-08-14T21:47:47.9205693Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9205893Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9206101Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9206353Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9206552Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9206778Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9206983Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9207194Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9207401Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9207606Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9207801Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9208036Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:47.9208396Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:47.9208717Z return mod(**inputs) 2025-08-14T21:47:47.9209056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9209418Z return func(*args, **kwargs) 2025-08-14T21:47:47.9209773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9210125Z return func(*args, **kwargs) 2025-08-14T21:47:47.9210456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9210806Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9211201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:47:47.9211586Z outputs = self.layoutlm( 2025-08-14T21:47:47.9211941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9212305Z return func(*args, **kwargs) 2025-08-14T21:47:47.9212657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9213014Z return func(*args, **kwargs) 2025-08-14T21:47:47.9213346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9213698Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9214086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:47:47.9214480Z encoder_outputs = self.encoder( 2025-08-14T21:47:47.9214846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9215208Z return func(*args, **kwargs) 2025-08-14T21:47:47.9215553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9215913Z return func(*args, **kwargs) 2025-08-14T21:47:47.9216266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9216621Z return func(*args, **kwargs) 2025-08-14T21:47:47.9216821Z [Previous line repeated 1 more time] 2025-08-14T21:47:47.9217173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9217524Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9217915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:47:47.9218306Z layer_outputs = layer_module( 2025-08-14T21:47:47.9218651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:47.9218999Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:47.9219379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9219846Z return func(*args, **kwargs) 2025-08-14T21:47:47.9220258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9220670Z 
return func(*args, **kwargs) 2025-08-14T21:47:47.9221060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9222341Z return func(*args, **kwargs) 2025-08-14T21:47:47.9222733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:47:47.9223178Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:47.9223615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:47.9224045Z return forward_fn(*input_tensors) 2025-08-14T21:47:47.9224500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:47:47.9225016Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:47:47.9225517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:47:47.9225991Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:47.9226389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:47:47.9226759Z return self.act(input) 2025-08-14T21:47:47.9226879Z 2025-08-14T21:47:47.9226975Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9227197Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9227421Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9227643Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9227866Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9228081Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9228303Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9228525Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9228739Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9228959Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9229181Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9229427Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:47.9229822Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:47.9230184Z return mod(**inputs) 2025-08-14T21:47:47.9230534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9230917Z return func(*args, **kwargs) 2025-08-14T21:47:47.9231283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9231668Z return func(*args, **kwargs) 2025-08-14T21:47:47.9232007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9232374Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9232787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:47:47.9233197Z outputs = self.layoutlm( 2025-08-14T21:47:47.9233555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9233931Z return func(*args, **kwargs) 2025-08-14T21:47:47.9234297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9234660Z return func(*args, **kwargs) 2025-08-14T21:47:47.9235002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9235408Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9235818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:47:47.9236235Z encoder_outputs = self.encoder( 2025-08-14T21:47:47.9236630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9237004Z return func(*args, **kwargs) 2025-08-14T21:47:47.9237358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9237730Z return func(*args, **kwargs) 2025-08-14T21:47:47.9238094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9238464Z return func(*args, **kwargs) 2025-08-14T21:47:47.9238656Z [Previous line repeated 1 more time] 2025-08-14T21:47:47.9239013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9239372Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9239767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:47:47.9240181Z layer_outputs = layer_module( 2025-08-14T21:47:47.9240534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:47.9240898Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:47.9241271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9241644Z return func(*args, **kwargs) 2025-08-14T21:47:47.9242234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9242606Z 
return func(*args, **kwargs) 2025-08-14T21:47:47.9242974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9243410Z return func(*args, **kwargs) 2025-08-14T21:47:47.9243806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:47:47.9244227Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:47.9244638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:47.9245044Z return forward_fn(*input_tensors) 2025-08-14T21:47:47.9245479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:47:47.9245960Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:47:47.9246419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:47:47.9246878Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:47.9247248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:47:47.9247589Z return self.act(input) 2025-08-14T21:47:47.9247707Z 2025-08-14T21:47:47.9247788Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9247999Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9248197Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9248400Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9248603Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9248797Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9248998Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9249201Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9249395Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9249720Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9249926Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9250198Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:47.9250556Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:47.9250911Z return mod(**inputs) 2025-08-14T21:47:47.9251270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9251631Z return func(*args, **kwargs) 2025-08-14T21:47:47.9252031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9252402Z return func(*args, **kwargs) 2025-08-14T21:47:47.9252735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9253076Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9253474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:47:47.9253871Z outputs = self.layoutlm( 2025-08-14T21:47:47.9254218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9254592Z return func(*args, **kwargs) 2025-08-14T21:47:47.9254944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9255306Z return func(*args, **kwargs) 2025-08-14T21:47:47.9255628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9255982Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9256375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:47:47.9256765Z encoder_outputs = self.encoder( 2025-08-14T21:47:47.9257135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9257502Z return func(*args, **kwargs) 2025-08-14T21:47:47.9257853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9258203Z return func(*args, **kwargs) 2025-08-14T21:47:47.9258555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9258919Z return func(*args, **kwargs) 2025-08-14T21:47:47.9259103Z [Previous line repeated 1 more time] 2025-08-14T21:47:47.9259454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9259889Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9260301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:47:47.9260700Z layer_outputs = layer_module( 2025-08-14T21:47:47.9261054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:47.9261425Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:47.9261794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9262150Z return func(*args, **kwargs) 2025-08-14T21:47:47.9262501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9262863Z 
return func(*args, **kwargs) 2025-08-14T21:47:47.9263208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9263573Z return func(*args, **kwargs) 2025-08-14T21:47:47.9264017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:47:47.9264490Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:47.9264901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:47.9265308Z return forward_fn(*input_tensors) 2025-08-14T21:47:47.9265743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:47:47.9266218Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:47:47.9266672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:47:47.9267117Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:47.9267503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:47:47.9267842Z return self.act(input) 2025-08-14T21:47:47.9267966Z 2025-08-14T21:47:47.9268048Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9268266Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9268487Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9268694Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9268902Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9269108Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9269308Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9269515Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9269721Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9269920Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9270127Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9270365Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:47:47.9270729Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:47:47.9271060Z return mod(**inputs) 2025-08-14T21:47:47.9271420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9271792Z return func(*args, **kwargs) 2025-08-14T21:47:47.9272153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9272529Z return func(*args, **kwargs) 2025-08-14T21:47:47.9272873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9273225Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9273632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward 2025-08-14T21:47:47.9274039Z outputs = self.layoutlm( 2025-08-14T21:47:47.9274406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9274770Z return func(*args, **kwargs) 2025-08-14T21:47:47.9275131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9275496Z return func(*args, **kwargs) 2025-08-14T21:47:47.9275809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9276157Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9276541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:47:47.9276924Z encoder_outputs = self.encoder( 2025-08-14T21:47:47.9277274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9277626Z return func(*args, **kwargs) 2025-08-14T21:47:47.9278015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9278412Z return func(*args, **kwargs) 2025-08-14T21:47:47.9278755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9279137Z return func(*args, **kwargs) 2025-08-14T21:47:47.9279345Z [Previous line repeated 1 more time] 2025-08-14T21:47:47.9279682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:47:47.9280024Z output = func(self, *args, **kwargs) 2025-08-14T21:47:47.9280413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:47:47.9280803Z layer_outputs = layer_module( 2025-08-14T21:47:47.9281137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:47:47.9281487Z return super().__call__(*args, **kwargs) 2025-08-14T21:47:47.9281852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9282202Z return func(*args, **kwargs) 2025-08-14T21:47:47.9282550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9282907Z 
return func(*args, **kwargs) 2025-08-14T21:47:47.9283252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:47:47.9283603Z return func(*args, **kwargs) 2025-08-14T21:47:47.9283978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 2025-08-14T21:47:47.9284382Z layer_output = apply_chunking_to_forward( 2025-08-14T21:47:47.9284769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:47:47.9285152Z return forward_fn(*input_tensors) 2025-08-14T21:47:47.9285560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:47:47.9286023Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:47:47.9286443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:47:47.9286868Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:47:47.9287240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:47:47.9287569Z return self.act(input) 2025-08-14T21:47:47.9287677Z 2025-08-14T21:47:47.9287757Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9287970Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9288177Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9288375Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9288578Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9288782Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9288978Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9289181Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9289384Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9289586Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9289779Z cudagraph partition due to non gpu ops 2025-08-14T21:47:47.9290009Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:47:47.9290373Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:47.9290684Z     return mod(**inputs)
2025-08-14T21:47:47.9291071Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9291435Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9291799Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9292184Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9292522Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:47:47.9292873Z     output = func(self, *args, **kwargs)
2025-08-14T21:47:47.9293263Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 757, in forward
2025-08-14T21:47:47.9293662Z     outputs = self.layoutlm(
2025-08-14T21:47:47.9294020Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9294378Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9294738Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9295105Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9295438Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:47:47.9295783Z     output = func(self, *args, **kwargs)
2025-08-14T21:47:47.9296181Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:47:47.9296583Z     encoder_outputs = self.encoder(
2025-08-14T21:47:47.9296954Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9297311Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9297667Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9298033Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9298385Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9298762Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9298958Z   [Previous line repeated 1 more time]
2025-08-14T21:47:47.9299307Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:47:47.9299753Z     output = func(self, *args, **kwargs)
2025-08-14T21:47:47.9300172Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:47:47.9300584Z     layer_outputs = layer_module(
2025-08-14T21:47:47.9300939Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:47:47.9301306Z     return super().__call__(*args, **kwargs)
2025-08-14T21:47:47.9301684Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9302057Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9302405Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9302771Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9303124Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9303478Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9303862Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:47:47.9304274Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:47:47.9304673Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:47:47.9305051Z     return forward_fn(*input_tensors)
2025-08-14T21:47:47.9305521Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:47:47.9306013Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:47:47.9306470Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:47:47.9306904Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:47:47.9307287Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:47:47.9307624Z     return self.act(input)
2025-08-14T21:47:47.9307746Z 
2025-08-14T21:47:47.9307826Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9308034Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9308238Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9308438Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9308635Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9308835Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9309036Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9309229Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9309427Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9309629Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9309821Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9310051Z cudagraph partition due to non gpu ops.
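Note: the trace above repeatedly enters transformers.pytorch_utils.apply_chunking_to_forward, which is how the LayoutLM feed-forward block is driven. As a quick reference, a minimal, self-contained sketch of that utility in isolation follows; the FeedForward module, sizes, and chunk size are illustrative assumptions, not values taken from this benchmark.

import torch
from torch import nn
from transformers.pytorch_utils import apply_chunking_to_forward

# Minimal sketch of the utility seen in the traceback above. Only
# apply_chunking_to_forward itself comes from the transformers package;
# everything else here is an assumed, toy feed-forward block.
class FeedForward(nn.Module):
    def __init__(self, hidden=768, intermediate=3072, chunk_size=128):
        super().__init__()
        self.dense_in = nn.Linear(hidden, intermediate)
        self.act = nn.GELU()
        self.dense_out = nn.Linear(intermediate, hidden)
        self.chunk_size = chunk_size  # 0 would disable chunking

    def feed_forward_chunk(self, attention_output):
        # Must map each chunk to an output of the same shape so the chunks
        # can be concatenated back along the chunked dimension.
        return self.dense_out(self.act(self.dense_in(attention_output)))

    def forward(self, attention_output):
        # Split along the sequence dimension (dim 1), run the chunk function
        # per slice, and concatenate the results; this bounds peak memory.
        return apply_chunking_to_forward(
            self.feed_forward_chunk, self.chunk_size, 1, attention_output
        )

x = torch.randn(2, 512, 768)      # (batch, seq_len, hidden); 512 % 128 == 0
print(FeedForward()(x).shape)     # torch.Size([2, 512, 768])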
2025-08-14T21:47:47.9386069Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9386280Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9386484Z cudagraph partition due to non gpu ops
2025-08-14T21:47:47.9386782Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:47:47.9387152Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:47:47.9387473Z     return mod(**inputs)
2025-08-14T21:47:47.9387844Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9388210Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9388560Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:47:47.9388923Z     return func(*args, **kwargs)
2025-08-14T21:47:47.9389253Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:47:47.9389591Z     output = func(self, *args, **kwargs)
2025-08-14T21:47:47.9389988Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 776, in forward
2025-08-14T21:47:47.9390379Z     masked_lm_loss = loss_fct(
2025-08-14T21:47:47.9390499Z 
2025-08-14T21:47:56.6370539Z Compilation time (from dynamo_timed): 18.231165897
2025-08-14T21:47:56.6421099Z pass
2025-08-14T21:47:56.6421483Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:47:56.6422337Z TIMING: _recursive_pre_grad_passes:0.04604 _recursive_joint_graph_passes:0.49054 _recursive_post_grad_passes:0.0792 async_compile.wait:0.70105 code_gen:8.08208 inductor_compile:10.17859 backend_compile:15.06975 gc:0.00027 entire_frame_compile:18.23117 total_wall_time:18.23117
2025-08-14T21:47:56.6423500Z STATS: call_* op count: 432 | FakeTensorMode.__torch_dispatch__:27394 | FakeTensor.__torch_dispatch__:3961 | ProxyTorchDispatchMode.__torch_dispatch__:6668
2025-08-14T21:47:56.6426544Z Dynamo produced 1 graphs covering 432 ops with 0 graph breaks (0 unique)
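Note: the "Compilation time (from dynamo_timed)", TIMING and STATS lines above come from the benchmark harness's instrumentation around torch.compile. A rough way to observe the same first-call compilation cost locally is sketched below; the toy module is an assumption and the numbers are machine-dependent, so this is illustrative only, not the harness's own measurement.

import time
import torch
from torch import nn

# Hedged sketch: time the first call of a torch.compile'd module (which
# triggers Dynamo/Inductor compilation) against a later, cached call.
model = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256)).eval()
compiled = torch.compile(model)            # default Inductor backend
x = torch.randn(8, 256)

with torch.no_grad():
    t0 = time.perf_counter()
    compiled(x)                            # first call: includes compilation
    t1 = time.perf_counter()
    compiled(x)                            # second call: reuses compiled code
    t2 = time.perf_counter()

print(f"first call (with compile): {t1 - t0:.3f} s")
print(f"second call (cached):      {t2 - t1:.6f} s")
# For a per-phase breakdown similar to the TIMING line, recent PyTorch builds
# also expose torch._dynamo.utils.compile_times(); it is an internal helper,
# so its availability and output format may vary between versions.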
2025-08-14T21:48:02.5969965Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:48:02.5970940Z   from pkg_resources import resource_filename
2025-08-14T21:48:03.6981620Z 
2025-08-14T21:48:04.8957982Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:48:04.8962412Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:48:04.8974548Z cpu eval LayoutLMForSequenceClassification
2025-08-14T21:48:05.4783762Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:05.6913621Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:05.9215495Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:48:16.6157400Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6162025Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6164146Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6164511Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6167818Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6168177Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6173114Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6173420Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6173666Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6174020Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6174393Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:16.6174943Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:16.6175318Z     return mod(**inputs)
2025-08-14T21:48:16.6176696Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6177246Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6177803Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-08-14T21:48:16.6178329Z     outputs = self.layoutlm(
2025-08-14T21:48:16.6178741Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6179152Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6179709Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6180134Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6180527Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6180937Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6181398Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:48:16.6181856Z     encoder_outputs = self.encoder(
2025-08-14T21:48:16.6182280Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6182699Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6183088Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6183496Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6183913Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6184345Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6184573Z   [Previous line repeated 1 more time]
2025-08-14T21:48:16.6184968Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6185364Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6185785Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:48:16.6186226Z     layer_outputs = layer_module(
2025-08-14T21:48:16.6186605Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:48:16.6187013Z     return super().__call__(*args, **kwargs)
2025-08-14T21:48:16.6187482Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6187882Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6188249Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6188620Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6188982Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6189381Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6189810Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:48:16.6190267Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:48:16.6190707Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:48:16.6191133Z     return forward_fn(*input_tensors)
2025-08-14T21:48:16.6191588Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:48:16.6192103Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:48:16.6192586Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:48:16.6193131Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:48:16.6193538Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:48:16.6193936Z     return self.act(input)
2025-08-14T21:48:16.6194096Z 
2025-08-14T21:48:16.6194196Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6194428Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6194644Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6194862Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6195081Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6195427Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6195650Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6195872Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6196086Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6196312Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6196537Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6196804Z cudagraph partition due to non gpu ops.
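Note: "cudagraph partition due to non gpu ops" is an Inductor diagnostic; it indicates that part of the compiled graph could not be placed under CUDA graph capture because the ops involved do not run on the GPU (this job runs the models on CPU, as the "cpu eval" lines show). A minimal, hedged sketch of the setting that normally brings CUDA graphs into play follows; the toy model is an assumption, not the benchmark's own invocation.

import torch
from torch import nn

# mode="reduce-overhead" asks Inductor to use CUDA graphs where it can.
# On a CPU-only run (as in this job) there is nothing to capture, and
# diagnostics like the one above explain which regions were partitioned out.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64)).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
compiled = torch.compile(model.to(device), mode="reduce-overhead")

with torch.no_grad():
    out = compiled(torch.randn(4, 64, device=device))
print(out.shape)   # torch.Size([4, 64])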
Found from : 2025-08-14T21:48:16.6197196Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:16.6197554Z return mod(**inputs) 2025-08-14T21:48:16.6197917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6198298Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6198737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:16.6199164Z outputs = self.layoutlm( 2025-08-14T21:48:16.6199551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6199939Z return func(*args, **kwargs) 2025-08-14T21:48:16.6200338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6200731Z return func(*args, **kwargs) 2025-08-14T21:48:16.6201082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6201469Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6201898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:16.6202329Z encoder_outputs = self.encoder( 2025-08-14T21:48:16.6202720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6203120Z return func(*args, **kwargs) 2025-08-14T21:48:16.6203501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6203899Z return func(*args, **kwargs) 2025-08-14T21:48:16.6204281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6204671Z return func(*args, **kwargs) 2025-08-14T21:48:16.6204878Z [Previous line repeated 1 more time] 2025-08-14T21:48:16.6205244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6205618Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6206040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:16.6206460Z layer_outputs = layer_module( 2025-08-14T21:48:16.6206834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:16.6207219Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:16.6207598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6207994Z return func(*args, **kwargs) 2025-08-14T21:48:16.6208368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6208756Z return func(*args, **kwargs) 2025-08-14T21:48:16.6209124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6209494Z return func(*args, **kwargs) 2025-08-14T21:48:16.6209884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:16.6210299Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:16.6210699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:16.6211100Z return forward_fn(*input_tensors) 2025-08-14T21:48:16.6211535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:16.6212022Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:16.6212470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:16.6212925Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:16.6213313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:16.6213650Z return self.act(input) 2025-08-14T21:48:16.6213774Z 2025-08-14T21:48:16.6213855Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6214070Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6214278Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6214479Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6214688Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6214901Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6215099Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6215308Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6215514Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6215747Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6215956Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6216188Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:16.6216556Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:16.6216899Z return mod(**inputs) 2025-08-14T21:48:16.6217246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6217628Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6218056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:16.6218487Z outputs = self.layoutlm( 2025-08-14T21:48:16.6218863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6219262Z return func(*args, **kwargs) 2025-08-14T21:48:16.6219812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6220233Z return func(*args, **kwargs) 2025-08-14T21:48:16.6220596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6220985Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6221413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:16.6221847Z encoder_outputs = self.encoder( 2025-08-14T21:48:16.6222267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6222658Z return func(*args, **kwargs) 2025-08-14T21:48:16.6223066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6223457Z return func(*args, **kwargs) 2025-08-14T21:48:16.6223824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6224203Z return func(*args, **kwargs) 2025-08-14T21:48:16.6224400Z [Previous line repeated 1 more time] 2025-08-14T21:48:16.6224760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6225119Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6225528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:16.6225930Z layer_outputs = layer_module( 2025-08-14T21:48:16.6226294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:16.6226675Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:16.6227057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6227438Z return func(*args, **kwargs) 2025-08-14T21:48:16.6227922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6228304Z return func(*args, **kwargs) 2025-08-14T21:48:16.6228661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6229034Z return func(*args, **kwargs) 2025-08-14T21:48:16.6229430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:16.6229850Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:16.6230266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:16.6230669Z return forward_fn(*input_tensors) 2025-08-14T21:48:16.6231110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:16.6231587Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:16.6232040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:16.6232480Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:16.6232871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:16.6233224Z return self.act(input) 2025-08-14T21:48:16.6233343Z 2025-08-14T21:48:16.6233425Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6233636Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6233834Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6234037Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6234242Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6234443Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6234639Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6234842Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6235043Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6235240Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6235440Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6235672Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:16.6236025Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:16.6236354Z return mod(**inputs) 2025-08-14T21:48:16.6236714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6237096Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6237488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:16.6237897Z outputs = self.layoutlm( 2025-08-14T21:48:16.6238247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6238599Z return func(*args, **kwargs) 2025-08-14T21:48:16.6238949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6239304Z return func(*args, **kwargs) 2025-08-14T21:48:16.6239632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6239974Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6240372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:16.6240766Z encoder_outputs = self.encoder( 2025-08-14T21:48:16.6241123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6241485Z return func(*args, **kwargs) 2025-08-14T21:48:16.6242034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6242408Z return func(*args, **kwargs) 2025-08-14T21:48:16.6242765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6243140Z return func(*args, **kwargs) 2025-08-14T21:48:16.6243343Z [Previous line repeated 1 more time] 2025-08-14T21:48:16.6243698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6244056Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6244474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:16.6244880Z layer_outputs = layer_module( 2025-08-14T21:48:16.6245228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:16.6245606Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:16.6245982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6246336Z return func(*args, **kwargs) 2025-08-14T21:48:16.6246687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6247048Z return func(*args, **kwargs) 2025-08-14T21:48:16.6247408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6247765Z return func(*args, **kwargs) 2025-08-14T21:48:16.6248155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:16.6248597Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:16.6248991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:16.6249373Z return forward_fn(*input_tensors) 2025-08-14T21:48:16.6249801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:16.6250295Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:16.6250820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:16.6251263Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:16.6251674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:16.6252056Z return self.act(input) 2025-08-14T21:48:16.6252169Z 2025-08-14T21:48:16.6252249Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6252458Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6252664Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6252867Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6253074Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6253281Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6253488Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6253689Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6253897Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6254114Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6254312Z cudagraph partition due to non gpu ops 2025-08-14T21:48:16.6254551Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:16.6254918Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:16.6255241Z return mod(**inputs) 2025-08-14T21:48:16.6255573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6255927Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6256327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:16.6256719Z outputs = self.layoutlm( 2025-08-14T21:48:16.6257077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6257453Z return func(*args, **kwargs) 2025-08-14T21:48:16.6257808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6258182Z return func(*args, **kwargs) 2025-08-14T21:48:16.6258522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6258880Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6259277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:16.6259767Z encoder_outputs = self.encoder( 2025-08-14T21:48:16.6260155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6260559Z return func(*args, **kwargs) 2025-08-14T21:48:16.6260961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6261380Z return func(*args, **kwargs) 2025-08-14T21:48:16.6261767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6262144Z return func(*args, **kwargs) 2025-08-14T21:48:16.6262345Z [Previous line repeated 1 more time] 2025-08-14T21:48:16.6262713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:16.6263067Z output = func(self, *args, **kwargs) 2025-08-14T21:48:16.6263484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:16.6263904Z layer_outputs = layer_module( 2025-08-14T21:48:16.6264261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:16.6264629Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:16.6265054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6265431Z return func(*args, **kwargs) 2025-08-14T21:48:16.6265807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6266196Z return func(*args, **kwargs) 2025-08-14T21:48:16.6266555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:16.6266920Z return func(*args, **kwargs) 2025-08-14T21:48:16.6267297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:16.6267725Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:48:16.6268128Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:48:16.6268526Z     return forward_fn(*input_tensors)
2025-08-14T21:48:16.6268948Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:48:16.6269427Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:48:16.6269876Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:48:16.6270304Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:48:16.6270689Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:48:16.6271019Z     return self.act(input)
2025-08-14T21:48:16.6271126Z 
2025-08-14T21:48:16.6271213Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6271411Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6271619Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6271818Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6272010Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6272213Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6272412Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6272603Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6272805Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6273001Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6273197Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6273421Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:16.6273773Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:16.6274090Z     return mod(**inputs)
2025-08-14T21:48:16.6274405Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6274751Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6275146Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-08-14T21:48:16.6275535Z     outputs = self.layoutlm(
2025-08-14T21:48:16.6275882Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6276245Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6276596Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6276949Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6277281Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6277629Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6278017Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward
2025-08-14T21:48:16.6278404Z     encoder_outputs = self.encoder(
2025-08-14T21:48:16.6278817Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6279197Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6279546Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6279931Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6280288Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6280648Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6280850Z   [Previous line repeated 1 more time]
2025-08-14T21:48:16.6281196Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6281549Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6281934Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward
2025-08-14T21:48:16.6282325Z     layer_outputs = layer_module(
2025-08-14T21:48:16.6282668Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:48:16.6283021Z     return super().__call__(*args, **kwargs)
2025-08-14T21:48:16.6283385Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6283744Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6284093Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6284449Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6284788Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6285143Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6285524Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward
2025-08-14T21:48:16.6285923Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:48:16.6286322Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:48:16.6286710Z     return forward_fn(*input_tensors)
2025-08-14T21:48:16.6287126Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk
2025-08-14T21:48:16.6287587Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:48:16.6288019Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward
2025-08-14T21:48:16.6288445Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:48:16.6288818Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:48:16.6289147Z     return self.act(input)
2025-08-14T21:48:16.6289264Z 
2025-08-14T21:48:16.6289345Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6289550Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6289745Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6289948Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6290147Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6290339Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6290540Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6290740Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6290933Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6291133Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6291333Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6291562Z cudagraph partition due to non gpu ops.
2025-08-14T21:48:16.6401976Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6402174Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6402408Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:16.6402812Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:16.6403132Z     return mod(**inputs)
2025-08-14T21:48:16.6403443Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6403792Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6404234Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-08-14T21:48:16.6404626Z     outputs = self.layoutlm(
2025-08-14T21:48:16.6404991Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6405357Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6405734Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:48:16.6406095Z     return func(*args, **kwargs)
2025-08-14T21:48:16.6406415Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6406761Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6407157Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 654, in forward
2025-08-14T21:48:16.6407558Z     pooled_output = self.pooler(sequence_output)
2025-08-14T21:48:16.6407973Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 431, in forward
2025-08-14T21:48:16.6408385Z     pooled_output = self.activation(pooled_output)
2025-08-14T21:48:16.6408534Z 
2025-08-14T21:48:16.6408621Z cudagraph partition due to non gpu ops
2025-08-14T21:48:16.6408848Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:16.6409201Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:16.6409538Z     return mod(**inputs)
2025-08-14T21:48:16.6409850Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6410194Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6410589Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward
2025-08-14T21:48:16.6411043Z     loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:48:16.6411220Z 
2025-08-14T21:48:16.6411325Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:48:16.6411679Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:48:16.6412000Z     return mod(**inputs)
2025-08-14T21:48:16.6412323Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:48:16.6412668Z     output = func(self, *args, **kwargs)
2025-08-14T21:48:16.6413060Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward
2025-08-14T21:48:16.6413500Z     loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
2025-08-14T21:48:16.6413673Z 
2025-08-14T21:48:33.6415157Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6415694Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6416019Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6416360Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6416692Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6416935Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6417165Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6417417Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6417750Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6417974Z cudagraph partition due to non gpu ops
2025-08-14T21:48:33.6418237Z cudagraph partition due to non gpu ops. Found from :
Found from : 2025-08-14T21:48:33.6418670Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:33.6419041Z return mod(**inputs) 2025-08-14T21:48:33.6419432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6420034Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6420905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:33.6421383Z outputs = self.layoutlm( 2025-08-14T21:48:33.6421855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6422338Z return func(*args, **kwargs) 2025-08-14T21:48:33.6425539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6425930Z return func(*args, **kwargs) 2025-08-14T21:48:33.6426279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6426634Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6427040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:33.6427439Z encoder_outputs = self.encoder( 2025-08-14T21:48:33.6427816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6428188Z return func(*args, **kwargs) 2025-08-14T21:48:33.6428538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6428904Z return func(*args, **kwargs) 2025-08-14T21:48:33.6429283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6429634Z return func(*args, **kwargs) 2025-08-14T21:48:33.6429829Z [Previous line repeated 1 more time] 2025-08-14T21:48:33.6430180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6430530Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6430921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:33.6431320Z layer_outputs = layer_module( 2025-08-14T21:48:33.6431666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:33.6432016Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:33.6432392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6432754Z return func(*args, **kwargs) 2025-08-14T21:48:33.6433104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6433454Z return func(*args, **kwargs) 2025-08-14T21:48:33.6433806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6434165Z return func(*args, **kwargs) 2025-08-14T21:48:33.6434543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:33.6434954Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:33.6435355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:33.6435743Z return forward_fn(*input_tensors) 2025-08-14T21:48:33.6436170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:33.6436650Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:33.6437094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:33.6437527Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:33.6437897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:33.6438273Z return self.act(input) 2025-08-14T21:48:33.6438392Z 2025-08-14T21:48:33.6438479Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6438711Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6438919Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6439153Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6439355Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6439612Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6439818Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6440019Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6440212Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6440410Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6440607Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6440832Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:33.6441192Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:33.6441518Z return mod(**inputs) 2025-08-14T21:48:33.6442049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6442402Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6442801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:33.6443192Z outputs = self.layoutlm( 2025-08-14T21:48:33.6443540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6443906Z return func(*args, **kwargs) 2025-08-14T21:48:33.6444264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6444628Z return func(*args, **kwargs) 2025-08-14T21:48:33.6444952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6445298Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6445692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:33.6446079Z encoder_outputs = self.encoder( 2025-08-14T21:48:33.6446440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6446800Z return func(*args, **kwargs) 2025-08-14T21:48:33.6447145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6447494Z return func(*args, **kwargs) 2025-08-14T21:48:33.6447842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6448197Z return func(*args, **kwargs) 2025-08-14T21:48:33.6448379Z [Previous line repeated 1 more time] 2025-08-14T21:48:33.6448721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6449066Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6449455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:33.6449840Z layer_outputs = layer_module( 2025-08-14T21:48:33.6450185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:33.6450540Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:33.6450901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6451262Z return func(*args, **kwargs) 2025-08-14T21:48:33.6451608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6452006Z return func(*args, **kwargs) 2025-08-14T21:48:33.6452351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6452744Z return func(*args, **kwargs) 2025-08-14T21:48:33.6453164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:33.6453617Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:33.6454014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:33.6454405Z return forward_fn(*input_tensors) 2025-08-14T21:48:33.6454825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:33.6455283Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:33.6455726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:33.6456157Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:33.6456532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:33.6456864Z return self.act(input) 2025-08-14T21:48:33.6456987Z 2025-08-14T21:48:33.6457070Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6457282Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6457481Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6457691Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6457895Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6458101Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6458303Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6458509Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6458716Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6458914Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6459121Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6459359Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:33.6459804Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:33.6460141Z return mod(**inputs) 2025-08-14T21:48:33.6460493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6460844Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6461236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:33.6461637Z outputs = self.layoutlm( 2025-08-14T21:48:33.6462005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6462380Z return func(*args, **kwargs) 2025-08-14T21:48:33.6462736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6463106Z return func(*args, **kwargs) 2025-08-14T21:48:33.6463443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6463783Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6464179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:33.6464578Z encoder_outputs = self.encoder( 2025-08-14T21:48:33.6464934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6465296Z return func(*args, **kwargs) 2025-08-14T21:48:33.6465680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6466040Z return func(*args, **kwargs) 2025-08-14T21:48:33.6466399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6466790Z return func(*args, **kwargs) 2025-08-14T21:48:33.6466982Z [Previous line repeated 1 more time] 2025-08-14T21:48:33.6467339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6467688Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6468099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:33.6468513Z layer_outputs = layer_module( 2025-08-14T21:48:33.6468864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:33.6469229Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:33.6469606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6469964Z return func(*args, **kwargs) 2025-08-14T21:48:33.6470323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6470690Z return func(*args, **kwargs) 2025-08-14T21:48:33.6471048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6471407Z return func(*args, **kwargs) 2025-08-14T21:48:33.6471794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:33.6472208Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:33.6472614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:33.6473001Z return forward_fn(*input_tensors) 2025-08-14T21:48:33.6473456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:33.6473931Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:33.6474366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:33.6474801Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:33.6475179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:33.6475516Z return self.act(input) 2025-08-14T21:48:33.6475625Z 2025-08-14T21:48:33.6475704Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6475914Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6476120Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6476317Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6476518Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6476722Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6476923Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6477119Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6477319Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6477520Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6477719Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6477957Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:48:33.6478318Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:33.6478634Z return mod(**inputs) 2025-08-14T21:48:33.6478963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6479314Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6479738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:33.6480143Z outputs = self.layoutlm( 2025-08-14T21:48:33.6480493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6480877Z return func(*args, **kwargs) 2025-08-14T21:48:33.6481233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6481586Z return func(*args, **kwargs) 2025-08-14T21:48:33.6481908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6482244Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6482622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 645, in forward 2025-08-14T21:48:33.6483005Z encoder_outputs = self.encoder( 2025-08-14T21:48:33.6483359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6483707Z return func(*args, **kwargs) 2025-08-14T21:48:33.6484050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6484399Z return func(*args, **kwargs) 2025-08-14T21:48:33.6484744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6485088Z return func(*args, **kwargs) 2025-08-14T21:48:33.6485278Z [Previous line repeated 1 more time] 2025-08-14T21:48:33.6485612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6485938Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6486326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 397, in forward 2025-08-14T21:48:33.6486710Z layer_outputs = layer_module( 2025-08-14T21:48:33.6487041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:48:33.6487381Z return super().__call__(*args, **kwargs) 2025-08-14T21:48:33.6487742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6488093Z return func(*args, **kwargs) 2025-08-14T21:48:33.6488433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6488773Z return func(*args, **kwargs) 2025-08-14T21:48:33.6489114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6489462Z return func(*args, **kwargs) 2025-08-14T21:48:33.6489824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 348, in forward 
2025-08-14T21:48:33.6649645Z layer_output = apply_chunking_to_forward( 2025-08-14T21:48:33.6650037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:48:33.6650413Z return forward_fn(*input_tensors) 2025-08-14T21:48:33.6650837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 356, in feed_forward_chunk 2025-08-14T21:48:33.6651306Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:48:33.6651735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 295, in forward 2025-08-14T21:48:33.6652147Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:48:33.6652512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:48:33.6652838Z return self.act(input) 2025-08-14T21:48:33.6652946Z 2025-08-14T21:48:33.6653030Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6653230Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6653458Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:48:33.6653813Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:33.6654127Z return mod(**inputs) 2025-08-14T21:48:33.6654454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6654800Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6655192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward 2025-08-14T21:48:33.6655583Z outputs = self.layoutlm( 2025-08-14T21:48:33.6655956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6656322Z return func(*args, **kwargs) 2025-08-14T21:48:33.6656686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:48:33.6657050Z return func(*args, **kwargs) 2025-08-14T21:48:33.6657398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6657783Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6658176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 654, in forward 2025-08-14T21:48:33.6658599Z pooled_output = self.pooler(sequence_output) 2025-08-14T21:48:33.6659018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 431, in forward 2025-08-14T21:48:33.6659431Z pooled_output = self.activation(pooled_output) 2025-08-14T21:48:33.6659653Z 2025-08-14T21:48:33.6659738Z cudagraph partition due to non gpu ops 2025-08-14T21:48:33.6659980Z cudagraph partition due to non gpu ops. 
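Editor's note: the repeated LayoutLM traces above all bottom out in transformers' chunked feed-forward helper (apply_chunking_to_forward -> feed_forward_chunk -> intermediate -> intermediate_act_fn). Below is a minimal sketch of that call pattern using a simplified stand-in module, not the actual LayoutLM layer; the hidden sizes and the GELU activation are illustrative assumptions.

    import torch
    from torch import nn
    from transformers.pytorch_utils import apply_chunking_to_forward  # requires transformers

    class ToyFeedForwardLayer(nn.Module):
        """Simplified stand-in for the intermediate/activation pattern in the traces above."""
        def __init__(self, hidden=768, intermediate=3072, chunk_size_feed_forward=0):
            super().__init__()
            self.dense_in = nn.Linear(hidden, intermediate)
            self.act = nn.GELU()              # plays the role of intermediate_act_fn
            self.dense_out = nn.Linear(intermediate, hidden)
            self.chunk_size_feed_forward = chunk_size_feed_forward
            self.seq_len_dim = 1              # chunk along the sequence dimension

        def feed_forward_chunk(self, attention_output):
            return self.dense_out(self.act(self.dense_in(attention_output)))

        def forward(self, attention_output):
            # apply_chunking_to_forward(forward_fn, chunk_size, chunk_dim, *input_tensors)
            # simply calls forward_fn(*input_tensors) when chunk_size == 0, i.e. the
            # "return forward_fn(*input_tensors)" frame shown at pytorch_utils.py:251.
            return apply_chunking_to_forward(
                self.feed_forward_chunk,
                self.chunk_size_feed_forward,
                self.seq_len_dim,
                attention_output,
            )

    layer = ToyFeedForwardLayer()
    out = layer(torch.randn(2, 16, 768))
    print(out.shape)  # torch.Size([2, 16, 768])

The next "Found from" trace below attributes the remaining LayoutLM partitions to the loss computation (loss_fct on logits.view).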
Found from : 2025-08-14T21:48:33.6660354Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:33.6660682Z return mod(**inputs) 2025-08-14T21:48:33.6661007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6661360Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6661743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward 2025-08-14T21:48:33.6662196Z loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) 2025-08-14T21:48:33.6662383Z 2025-08-14T21:48:33.6662489Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:48:33.6662857Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:48:33.6663173Z return mod(**inputs) 2025-08-14T21:48:33.6663501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:48:33.6663850Z output = func(self, *args, **kwargs) 2025-08-14T21:48:33.6664246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward 2025-08-14T21:48:33.6664683Z loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) 2025-08-14T21:48:33.6664867Z 2025-08-14T21:48:36.1785393Z Compilation time (from dynamo_timed): 28.350578025 2025-08-14T21:48:36.1793521Z pass 2025-08-14T21:48:36.1795820Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:48:36.1796961Z TIMING: _recursive_pre_grad_passes:0.07925 _recursive_joint_graph_passes:0.81666 _recursive_post_grad_passes:0.13907 async_compile.wait:0.73459 code_gen:9.47183 inductor_compile:12.52666 backend_compile:22.23064 gc:0.00279 entire_frame_compile:28.35058 total_wall_time:28.35058 2025-08-14T21:48:36.1798041Z STATS: call_* op count: 860 | FakeTensorMode.__torch_dispatch__:53425 | FakeTensor.__torch_dispatch__:7669 | ProxyTorchDispatchMode.__torch_dispatch__:13107 2025-08-14T21:48:36.1798613Z Dynamo produced 2 graphs covering 860 ops with 0 graph breaks (0 unique) 2025-08-14T21:48:41.8884599Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:48:41.8885510Z from pkg_resources import resource_filename 2025-08-14T21:48:42.4757351Z 2025-08-14T21:48:49.2886889Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:48:49.2891293Z loading model: 0it [00:06, ?it/s] 2025-08-14T21:48:49.2917105Z cpu eval M2M100ForConditionalGeneration 2025-08-14T21:48:50.2265966Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:48:50.6257499Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:48:51.0251288Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:49:12.1246218Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1249403Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1254201Z return mod(**inputs) 2025-08-14T21:49:12.1257454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1257941Z outputs = self.model( 2025-08-14T21:49:12.1258372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1258834Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1259276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward 2025-08-14T21:49:12.1260043Z embed_pos = self.embed_positions(input_ids, inputs_embeds) 2025-08-14T21:49:12.1260533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context 2025-08-14T21:49:12.1260950Z return func(*args, **kwargs) 2025-08-14T21:49:12.1261369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward 2025-08-14T21:49:12.1261959Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to( 2025-08-14T21:49:12.1262616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 80, in create_position_ids_from_input_ids 2025-08-14T21:49:12.1263180Z mask = input_ids.ne(padding_idx).int() 2025-08-14T21:49:12.1263341Z 2025-08-14T21:49:12.1263441Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1263685Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1263915Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1264141Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1264358Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1264583Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1264807Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1265025Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1265250Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1265474Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1265691Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1265915Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1266176Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1266577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1266953Z return mod(**inputs) 2025-08-14T21:49:12.1267369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1267798Z outputs = self.model( 2025-08-14T21:49:12.1268204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1268633Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1269059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward 2025-08-14T21:49:12.1269547Z embed_pos = self.embed_positions(input_ids, inputs_embeds) 2025-08-14T21:49:12.1269950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context 2025-08-14T21:49:12.1270540Z return func(*args, **kwargs) 2025-08-14T21:49:12.1270950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward 2025-08-14T21:49:12.1271585Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to( 2025-08-14T21:49:12.1272312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 81, in create_position_ids_from_input_ids 2025-08-14T21:49:12.1272912Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:49:12.1273167Z 2025-08-14T21:49:12.1273292Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:12.1273677Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1274027Z return mod(**inputs) 2025-08-14T21:49:12.1274418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1274827Z outputs = self.model( 2025-08-14T21:49:12.1275237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1275655Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1276063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward 2025-08-14T21:49:12.1276535Z embed_pos = self.embed_positions(input_ids, inputs_embeds) 2025-08-14T21:49:12.1276962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context 2025-08-14T21:49:12.1277348Z return func(*args, **kwargs) 2025-08-14T21:49:12.1277750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward 2025-08-14T21:49:12.1278306Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to( 2025-08-14T21:49:12.1278918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 81, in create_position_ids_from_input_ids 2025-08-14T21:49:12.1279512Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:49:12.1279765Z 2025-08-14T21:49:12.1279866Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1280226Z 
cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1280454Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1280662Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1280869Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1281078Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1281294Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1281545Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:12.1281940Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1282293Z return mod(**inputs) 2025-08-14T21:49:12.1282687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1283103Z outputs = self.model( 2025-08-14T21:49:12.1283483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1283901Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1284306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1284728Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1285109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1285501Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1285942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1286384Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1286835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1287304Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1287780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1288297Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1288501Z 2025-08-14T21:49:12.1288613Z cudagraph partition due to non gpu ops. 
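Editor's note: the first M2M100 partitions are attributed to position-id construction (modeling_m2m_100.py lines 80-81 in the traces above). The sketch below reconstructs that helper: the two middle lines are copied verbatim from the traced frames, while the signature, return statement, and example values are reconstructions and should be treated as assumptions.

    import torch

    def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
        # The two lines below are the ones named in the traces (lines 80-81).
        mask = input_ids.ne(padding_idx).int()
        incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
        # Return reconstructed from the transformers implementation: positions start after padding_idx.
        return incremental_indices.long() + padding_idx

    input_ids = torch.tensor([[5, 7, 9, 1, 1]])  # illustrative token ids; 1 = padding_idx
    print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
    # tensor([[2, 3, 4, 1, 1]])

The traces that follow move into the self-attention path (sdpa_attention_forward).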
Found from :
2025-08-14T21:49:12.1289002Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:12.1289345Z return mod(**inputs)
2025-08-14T21:49:12.1289737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:49:12.1290148Z outputs = self.model(
2025-08-14T21:49:12.1290535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:49:12.1290943Z encoder_outputs = self.encoder(
2025-08-14T21:49:12.1291358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:49:12.1291767Z layer_outputs = encoder_layer(
2025-08-14T21:49:12.1292133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:49:12.1292530Z return super().__call__(*args, **kwargs)
2025-08-14T21:49:12.1292947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
2025-08-14T21:49:12.1293375Z hidden_states, attn_weights = self.self_attn(
2025-08-14T21:49:12.1293810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-08-14T21:49:12.1294262Z attn_output, attn_weights = attention_interface(
2025-08-14T21:49:12.1294752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:49:12.1295259Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:49:12.1295438Z
2025-08-14T21:49:12.1295525Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1295759Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1296020Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:49:12.1296413Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:12.1296766Z return mod(**inputs)
2025-08-14T21:49:12.1297169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-08-14T21:49:12.1297593Z outputs = self.model(
2025-08-14T21:49:12.1297985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-08-14T21:49:12.1298413Z encoder_outputs = self.encoder(
2025-08-14T21:49:12.1298832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-08-14T21:49:12.1299246Z layer_outputs = encoder_layer(
2025-08-14T21:49:12.1299779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:49:12.1300203Z return super().__call__(*args, **kwargs)
2025-08-14T21:49:12.1300686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward
2025-08-14T21:49:12.1301157Z hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:49:12.1301402Z
2025-08-14T21:49:12.1301488Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1301721Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1301958Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1302178Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1302416Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1302633Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1302846Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1303066Z cudagraph partition due to non gpu ops
2025-08-14T21:49:12.1303314Z cudagraph partition due to non gpu ops.
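The modeling_m2m_100.py line-389 frames point at the encoder layer's feed-forward step, hidden_states = self.activation_fn(self.fc1(hidden_states)). A minimal sketch of that step follows; the layer sizes and the ReLU activation are illustrative assumptions, and only the fc1/activation_fn pattern comes from the traceback.

import torch
import torch.nn as nn

class EncoderFeedForwardSketch(nn.Module):
    # Toy stand-in for the encoder layer's MLP; dimensions are arbitrary.
    def __init__(self, d_model=64, d_ff=256):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.activation_fn = nn.ReLU()

    def forward(self, hidden_states):
        # The step named at modeling_m2m_100.py line 389 in the frames above.
        hidden_states = self.activation_fn(self.fc1(hidden_states))
        return self.fc2(hidden_states)

if __name__ == "__main__":
    x = torch.randn(2, 16, 64)
    print(EncoderFeedForwardSketch()(x).shape)  # torch.Size([2, 16, 64])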
Found from : 2025-08-14T21:49:12.1303695Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1304042Z return mod(**inputs) 2025-08-14T21:49:12.1304441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1304845Z outputs = self.model( 2025-08-14T21:49:12.1305234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1305652Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1306065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1306472Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1306849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1307250Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1307697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1308140Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1308571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1309030Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1309519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1310037Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1310237Z 2025-08-14T21:49:12.1310348Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1310746Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1311096Z return mod(**inputs) 2025-08-14T21:49:12.1311484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1311889Z outputs = self.model( 2025-08-14T21:49:12.1312281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1312704Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1313103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1313540Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1313928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1314322Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1314728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1315168Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1315651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1316085Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1316574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1317086Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1317267Z 2025-08-14T21:49:12.1317385Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1317621Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1317883Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1318280Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1318639Z return mod(**inputs) 2025-08-14T21:49:12.1319028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1319463Z outputs = self.model( 2025-08-14T21:49:12.1319937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1320345Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1320781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1321201Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1321590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1321975Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1322382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1322838Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1323030Z 2025-08-14T21:49:12.1323115Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1323346Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1323562Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1323783Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1324000Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1324215Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1324436Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1324660Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1324909Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1325302Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1325656Z return mod(**inputs) 2025-08-14T21:49:12.1326055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1326468Z outputs = self.model( 2025-08-14T21:49:12.1326856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1327282Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1327704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1328136Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1328517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1328921Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1329324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1329768Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1330198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1330690Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1331174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1331717Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1331946Z 2025-08-14T21:49:12.1332062Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1332482Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1332834Z return mod(**inputs) 2025-08-14T21:49:12.1333237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1333675Z outputs = self.model( 2025-08-14T21:49:12.1334073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1334517Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1334934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1335361Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1335742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1336143Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1336568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1337009Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1337436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1337888Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1338365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1338855Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1339041Z 2025-08-14T21:49:12.1339127Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1339363Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1339729Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1340123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1340489Z return mod(**inputs) 2025-08-14T21:49:12.1340890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1341316Z outputs = self.model( 2025-08-14T21:49:12.1341719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1342365Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1342794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1343218Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1343611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1344015Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1344435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1344917Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1345116Z 2025-08-14T21:49:12.1345203Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1345435Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1345653Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1345883Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1346199Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1346422Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1346684Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1346917Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1347216Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1347652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1348015Z return mod(**inputs) 2025-08-14T21:49:12.1348414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1348840Z outputs = self.model( 2025-08-14T21:49:12.1349243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1349683Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1350103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1350537Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1350923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1351324Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1351748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1352202Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1352647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1353110Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1353592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1354126Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1354328Z 2025-08-14T21:49:12.1354449Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1354850Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1355197Z return mod(**inputs) 2025-08-14T21:49:12.1355616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1356050Z outputs = self.model( 2025-08-14T21:49:12.1356453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1356892Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1357309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1357742Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1358134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1358525Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1358943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1359407Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1359858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1360310Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1360804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1361305Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1361491Z 2025-08-14T21:49:12.1361625Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1361862Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1362141Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1362531Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1362945Z return mod(**inputs) 2025-08-14T21:49:12.1363371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1363787Z outputs = self.model( 2025-08-14T21:49:12.1364196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1364628Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1365046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1365477Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1365873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1366281Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1366702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1367180Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1367380Z 2025-08-14T21:49:12.1367468Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1367705Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1367928Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1368156Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1368380Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1368597Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1368822Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1369056Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1369300Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1369696Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1370058Z return mod(**inputs) 2025-08-14T21:49:12.1370460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1370875Z outputs = self.model( 2025-08-14T21:49:12.1371276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1371718Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1372139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1372569Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1372956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1373352Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1373772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1374228Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1374670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1375123Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1375618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1376144Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1376346Z 2025-08-14T21:49:12.1376469Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1376902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1377273Z return mod(**inputs) 2025-08-14T21:49:12.1377672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1378933Z outputs = self.model( 2025-08-14T21:49:12.1379347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1379888Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1380316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1380752Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1381141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1381551Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1381986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1382427Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1382876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1383338Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1383833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1384335Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1384526Z 2025-08-14T21:49:12.1384615Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1384849Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1385109Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1385507Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1385873Z return mod(**inputs) 2025-08-14T21:49:12.1386277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1386697Z outputs = self.model( 2025-08-14T21:49:12.1387124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1387561Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1387982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1388415Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1388800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1389211Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1389642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1390123Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1390326Z 2025-08-14T21:49:12.1390415Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1390656Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1390870Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1391096Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1391318Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1391531Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1391762Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1391983Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1392228Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1392650Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1393010Z return mod(**inputs) 2025-08-14T21:49:12.1393425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1393848Z outputs = self.model( 2025-08-14T21:49:12.1394255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1394672Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1395068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1395494Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1395871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1396263Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1396670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1397102Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1397533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1397969Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1398437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1398947Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1399140Z 2025-08-14T21:49:12.1399260Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1399639Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1399989Z return mod(**inputs) 2025-08-14T21:49:12.1400376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1400784Z outputs = self.model( 2025-08-14T21:49:12.1401162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1401579Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1401988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1402403Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1402770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1403154Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1403575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1403996Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1404423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1404861Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1405334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1405814Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1405994Z 2025-08-14T21:49:12.1406081Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1406309Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1406556Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1406943Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1407292Z return mod(**inputs) 2025-08-14T21:49:12.1407709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1408138Z outputs = self.model( 2025-08-14T21:49:12.1408529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1408962Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1409399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1409809Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1410190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1410585Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1410994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1411480Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1411677Z 2025-08-14T21:49:12.1411762Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1411986Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1412205Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1412425Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1412645Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1412861Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1413081Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1413300Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1413544Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1413932Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1414283Z return mod(**inputs) 2025-08-14T21:49:12.1414676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1415081Z outputs = self.model( 2025-08-14T21:49:12.1415473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1415894Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1416299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1416711Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1417085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1417469Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1417878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1418329Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1418773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1419278Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1419883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1420414Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1420615Z 2025-08-14T21:49:12.1420745Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1421125Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1421492Z return mod(**inputs) 2025-08-14T21:49:12.1421896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1422332Z outputs = self.model( 2025-08-14T21:49:12.1422764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1423229Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1423652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1424088Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1424485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1424884Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1425320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1425772Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1426223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1426679Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1427181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1427687Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1427875Z 2025-08-14T21:49:12.1427965Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1428204Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1428459Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1428860Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1429221Z return mod(**inputs) 2025-08-14T21:49:12.1429625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1430053Z outputs = self.model( 2025-08-14T21:49:12.1430454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1430851Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1431230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1431626Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1431983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1432351Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1432741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1433200Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1433385Z 2025-08-14T21:49:12.1433480Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1433709Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1433929Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1434154Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1434373Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1434590Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1434809Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1435030Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1435274Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1435667Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1436017Z return mod(**inputs) 2025-08-14T21:49:12.1436410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1436826Z outputs = self.model( 2025-08-14T21:49:12.1437244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1437649Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1438060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1438462Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1438830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1439193Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1439580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1440008Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1440439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1440871Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1441344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1442077Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1442281Z 2025-08-14T21:49:12.1442406Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1442787Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1443138Z return mod(**inputs) 2025-08-14T21:49:12.1443529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1443940Z outputs = self.model( 2025-08-14T21:49:12.1444322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1444736Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1445146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1445550Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1445925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1446308Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1446723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1447144Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1447572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1448007Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1448479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1448953Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1449132Z 2025-08-14T21:49:12.1449217Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1449446Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1449690Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1450073Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1450421Z return mod(**inputs) 2025-08-14T21:49:12.1450809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1451207Z outputs = self.model( 2025-08-14T21:49:12.1451592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1452019Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1452491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1452924Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1453296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1453718Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1454164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1454625Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1454811Z 2025-08-14T21:49:12.1454902Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1455122Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1455335Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1455554Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1455776Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1455992Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1456212Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1456430Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1456673Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1457059Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1457409Z return mod(**inputs) 2025-08-14T21:49:12.1457789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1458198Z outputs = self.model( 2025-08-14T21:49:12.1458586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1459018Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1459475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1459922Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1460310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1460720Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1461135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1461569Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1461997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1462425Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1462915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1463429Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1463622Z 2025-08-14T21:49:12.1463743Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1464121Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1464487Z return mod(**inputs) 2025-08-14T21:49:12.1464885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1465298Z outputs = self.model( 2025-08-14T21:49:12.1465678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1466066Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1466445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1466864Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1467279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1467691Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1468110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1468536Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1468970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1469451Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1469957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1470405Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1470576Z 2025-08-14T21:49:12.1470661Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1470876Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1471108Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1471473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1471800Z return mod(**inputs) 2025-08-14T21:49:12.1472166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1472543Z outputs = self.model( 2025-08-14T21:49:12.1472907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1473294Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1473671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1474054Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1474410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1474775Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1475164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1475598Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1475772Z 2025-08-14T21:49:12.1475861Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1476067Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1476279Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1476491Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1476711Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1476926Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1477155Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1477365Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1477596Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1477979Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1478326Z return mod(**inputs) 2025-08-14T21:49:12.1478699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1479093Z outputs = self.model( 2025-08-14T21:49:12.1479461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1479854Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1480229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1480619Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1480998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1481369Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1481797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1482247Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1482703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1483133Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1483607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1484113Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1484307Z 2025-08-14T21:49:12.1484427Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1484806Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1485158Z return mod(**inputs) 2025-08-14T21:49:12.1485547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1485957Z outputs = self.model( 2025-08-14T21:49:12.1486340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1486754Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1487158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1487563Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1487933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1488316Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1488730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1489163Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1489602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1490052Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1490527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1491024Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1491206Z 2025-08-14T21:49:12.1491304Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1491528Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1491771Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1492158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1492511Z return mod(**inputs) 2025-08-14T21:49:12.1492894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1493308Z outputs = self.model( 2025-08-14T21:49:12.1493697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1494119Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1494516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1494942Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1495313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1495707Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1496139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 389, in forward 2025-08-14T21:49:12.1496616Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1496826Z 2025-08-14T21:49:12.1496923Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1497141Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1497414Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1497646Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1497870Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1498086Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1498313Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1498538Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1498787Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1499189Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1499636Z return mod(**inputs) 2025-08-14T21:49:12.1500033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1500462Z outputs = self.model( 2025-08-14T21:49:12.1500863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-08-14T21:49:12.1501297Z encoder_outputs = self.encoder( 2025-08-14T21:49:12.1501708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-08-14T21:49:12.1502130Z layer_outputs = encoder_layer( 2025-08-14T21:49:12.1502514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1502902Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1503329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-08-14T21:49:12.1503771Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:49:12.1504210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1504659Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1505144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1505660Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1505857Z 2025-08-14T21:49:12.1505978Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1506361Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
    layer_outputs = encoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()
cudagraph partition due to non gpu ops (repeated 3 times)
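The encoder-side traces above all bottom out in two call sites inside the compiled M2M100 forward: the torch.nn.functional.scaled_dot_product_attention call (and the attn_output.transpose(1, 2).contiguous() that follows it) in transformers' sdpa_attention_forward helper, and the self.activation_fn(self.fc1(hidden_states)) feed-forward step. As a rough illustration of that pattern, the sketch below builds a made-up encoder-layer-like module (ToyEncoderLayer and all of its dimensions are invented for this example, not taken from the benchmark or from transformers) and compiles it with torch.compile(mode="reduce-overhead"), the mode that enables CUDA graphs in Inductor. Whether this particular script actually prints "cudagraph partition due to non gpu ops" depends on the PyTorch version, the device, and the logging configuration.

# Minimal, illustrative sketch only -- ToyEncoderLayer and its sizes are
# invented for this example and are not part of the benchmark or transformers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyEncoderLayer(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4) -> None:
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.fc1 = nn.Linear(d_model, 4 * d_model)
        self.fc2 = nn.Linear(4 * d_model, d_model)
        self.activation_fn = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seq, d_model = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim) for SDPA, mirroring the
        # attention call the traces above point at.
        q = q.view(bsz, seq, self.n_heads, self.head_dim).transpose(1, 2)
        k = k.view(bsz, seq, self.n_heads, self.head_dim).transpose(1, 2)
        v = v.view(bsz, seq, self.n_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)
        # The transpose(1, 2).contiguous() after SDPA is the other call site
        # named in the traces.
        attn = attn.transpose(1, 2).contiguous().view(bsz, seq, d_model)
        x = x + self.out(attn)
        # fc1 + activation, as in the encoder feed-forward frame above.
        return x + self.fc2(self.activation_fn(self.fc1(x)))


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    layer = ToyEncoderLayer().to(device).eval()
    # "reduce-overhead" enables CUDA graphs in Inductor; on CPU the module
    # still compiles, but no cudagraph-related messages are expected.
    compiled = torch.compile(layer, mode="reduce-overhead")
    x = torch.randn(2, 16, 256, device=device)
    with torch.no_grad():
        for _ in range(3):  # a few warm-up iterations so cudagraphs can record
            out = compiled(x)
    print(out.shape)

The point of the sketch is only to show where the partition-relevant ops sit relative to each other in an encoder layer, not to reproduce the benchmark's behavior or numbers.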
Found from : 2025-08-14T21:49:12.1542981Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
cudagraph partition due to non gpu ops
Found from : 2025-08-14T21:49:12.1550074Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()
cudagraph partition due to non gpu ops (repeated 9 times)
Found from : 2025-08-14T21:49:12.1559045Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
cudagraph partition due to non gpu ops
Found from : 2025-08-14T21:49:12.1566166Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()
cudagraph partition due to non gpu ops (repeated 3 times)
Found from : 2025-08-14T21:49:12.1574025Z
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
cudagraph partition due to non gpu ops (repeated 9 times)
Found from : 2025-08-14T21:49:12.1688613Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1688684Z return mod(**inputs) 2025-08-14T21:49:12.1688968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1689042Z outputs = self.model( 2025-08-14T21:49:12.1689316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1689406Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1689684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1689769Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1690008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1690093Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1690375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1690481Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1690759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1690871Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1691185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1691308Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1691312Z 2025-08-14T21:49:12.1691399Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1691483Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1691574Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1691656Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1691739Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1691832Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1691914Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1692005Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1692117Z cudagraph partition due to non gpu ops. 
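The records above all report the same situation: while compiling the M2M100 forward pass, inductor could not keep the whole region inside a single CUDA graph because some operations do not run on the GPU, so it partitioned the graph and logged the user stack ("Found from :") of each offending call site. The sketch below is a minimal, self-contained illustration of that situation and not the benchmark harness itself: a toy module with a CPU round-trip in the middle of its forward, compiled with CUDA graphs enabled via mode="reduce-overhead". Depending on the PyTorch build, inductor either skips CUDA graphs for such a graph entirely or, where cudagraph partitioning is available (recent builds expose a toggle for it under torch._inductor.config, with a name that varies by version), splits the graph around the non-GPU ops and emits diagnostics like the ones above. The module name and tensor shapes are made up for illustration.

```python
# Minimal sketch (not the benchmark code): a compiled module with a CPU
# round-trip in the middle of its forward pass. The .cpu()/.to("cuda") pair
# is a "non gpu op" from the CUDA-graph capturer's point of view, so inductor
# cannot record the whole forward as one CUDA graph.
import torch


class ToyBlock(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.fc1 = torch.nn.Linear(64, 64)
        self.fc2 = torch.nn.Linear(64, 64)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.fc1(x)
        x = x.cpu().relu().to("cuda")  # forces work off the GPU mid-graph
        return self.fc2(x)


if torch.cuda.is_available():
    model = ToyBlock().cuda()
    # mode="reduce-overhead" turns on CUDA graphs in inductor
    compiled = torch.compile(model, mode="reduce-overhead")
    x = torch.randn(8, 64, device="cuda")
    for _ in range(3):  # a few warm-up iterations so graph recording kicks in
        out = compiled(x)
    print(out.shape)
```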
Found from : 2025-08-14T21:49:12.1692333Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1692412Z return mod(**inputs) 2025-08-14T21:49:12.1692689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1692762Z outputs = self.model( 2025-08-14T21:49:12.1693048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1693128Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1693412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1693506Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1693752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1693903Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1694194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1694326Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1694605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1694707Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1695023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1695162Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1695167Z 2025-08-14T21:49:12.1695279Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1695509Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1695582Z return mod(**inputs) 2025-08-14T21:49:12.1695869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1695942Z outputs = self.model( 2025-08-14T21:49:12.1696224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1696310Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1696590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1696670Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1696925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1697014Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1697304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1697422Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1697702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1697814Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1698134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1698254Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1698258Z 2025-08-14T21:49:12.1698344Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1698430Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1698549Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1698771Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1698845Z return mod(**inputs) 2025-08-14T21:49:12.1699138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1699212Z outputs = self.model( 2025-08-14T21:49:12.1699654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1699741Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1700021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1700110Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1700379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1700492Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1700767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:49:12.1700914Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1700919Z 2025-08-14T21:49:12.1701028Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701114Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701195Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701288Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701370Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701453Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701543Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701625Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1701746Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1701962Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1702036Z return mod(**inputs) 2025-08-14T21:49:12.1702317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1702392Z outputs = self.model( 2025-08-14T21:49:12.1702664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1702753Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1703025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1703112Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1703349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1703437Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1703720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1703828Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1704111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1704214Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1704526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1704673Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1704677Z 2025-08-14T21:49:12.1704790Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1705005Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1705084Z return mod(**inputs) 2025-08-14T21:49:12.1705362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1705444Z outputs = self.model( 2025-08-14T21:49:12.1705719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1705799Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1706083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1706162Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1706408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1706495Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1706789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1706921Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1707196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1707317Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1707654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1707771Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1707775Z 2025-08-14T21:49:12.1707873Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1707958Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1708042Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1708132Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1708215Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1708298Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1708389Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1708471Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1708593Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1708810Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1708884Z return mod(**inputs) 2025-08-14T21:49:12.1709169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1709242Z outputs = self.model( 2025-08-14T21:49:12.1709527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1709615Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1709907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1709994Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1710243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1710330Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1710618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1710735Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1711007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1711117Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1711440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1711589Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1711594Z 2025-08-14T21:49:12.1711705Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1711921Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1712002Z return mod(**inputs) 2025-08-14T21:49:12.1712290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1712371Z outputs = self.model( 2025-08-14T21:49:12.1712658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1712738Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1713028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1713125Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1713369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1713484Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1713773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1713915Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1714204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1714308Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1714639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1714756Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1714760Z 2025-08-14T21:49:12.1714856Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1714941Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1715053Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1715279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1715355Z return mod(**inputs) 2025-08-14T21:49:12.1715641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1715723Z outputs = self.model( 2025-08-14T21:49:12.1715999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1716086Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1716371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1716451Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1716700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1716787Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1717075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:49:12.1717206Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1717210Z 2025-08-14T21:49:12.1717294Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1717383Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1717466Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1717547Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1717637Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1717718Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1717801Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1717892Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1718004Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1718230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1718302Z return mod(**inputs) 2025-08-14T21:49:12.1718589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1718673Z outputs = self.model( 2025-08-14T21:49:12.1718956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1719034Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1719322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1719400Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1719683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1719786Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1720065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1720198Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1720488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1720600Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1720914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1721054Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1721058Z 2025-08-14T21:49:12.1721181Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1721397Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1721474Z return mod(**inputs) 2025-08-14T21:49:12.1721760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1721836Z outputs = self.model( 2025-08-14T21:49:12.1722125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1722208Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1722482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1722570Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1722812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1722907Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1723183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1723293Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1723577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1723682Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1723997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1724121Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1724125Z 2025-08-14T21:49:12.1724212Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724303Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724387Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724471Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724561Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724645Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724724Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724818Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1724928Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1725156Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1725227Z return mod(**inputs) 2025-08-14T21:49:12.1725506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1725584Z outputs = self.model( 2025-08-14T21:49:12.1725859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1725954Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1726238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1726334Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1726601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1726703Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1726981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1727106Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1727381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1727484Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1727807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1727947Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1727950Z 2025-08-14T21:49:12.1728069Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1728290Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1728362Z return mod(**inputs) 2025-08-14T21:49:12.1728654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1728728Z outputs = self.model( 2025-08-14T21:49:12.1729032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1729112Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1729413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1729501Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1729747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1729835Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1730123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1730239Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1730526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1730628Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1730949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1731075Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1731078Z 2025-08-14T21:49:12.1731166Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1731258Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1731373Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1731594Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1731675Z return mod(**inputs) 2025-08-14T21:49:12.1731958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1732033Z outputs = self.model( 2025-08-14T21:49:12.1732325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1732406Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1732716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1732798Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1733054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1733170Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1733458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:49:12.1733597Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1733604Z 2025-08-14T21:49:12.1733687Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1733769Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1733859Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1733949Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1734030Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1734120Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1734201Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1734284Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1734403Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1734624Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1734703Z return mod(**inputs) 2025-08-14T21:49:12.1734980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1735055Z outputs = self.model( 2025-08-14T21:49:12.1735340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1735420Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1735694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1735781Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1736025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1736122Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1736402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1736512Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1736796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1736899Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1737223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1737365Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1737371Z 2025-08-14T21:49:12.1737482Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1737711Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1737786Z return mod(**inputs) 2025-08-14T21:49:12.1738066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1738148Z outputs = self.model( 2025-08-14T21:49:12.1738425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1738513Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1738788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1738866Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1739133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1739220Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1739602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1739742Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1740046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1740160Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1740476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1740593Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1740606Z 2025-08-14T21:49:12.1740694Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1740781Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1740873Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1740956Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1741038Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1741129Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1741209Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1741290Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1741412Z cudagraph partition due to non gpu ops. 
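For readers unfamiliar with the flagged call sites: the two transformers/integrations/sdpa_attention.py lines named in the traces (line 81, the scaled_dot_product_attention call, and line 91, the transpose(1, 2).contiguous() that follows it) correspond to the usual SDPA call shape. Below is a simplified, self-contained reproduction of that shape only, with made-up tensor sizes; it is not the transformers implementation.

```python
# Sketch of the call shape flagged at sdpa_attention.py:81 and :91 in the
# traces above. Shapes are illustrative; this is not the transformers code.
import torch
import torch.nn.functional as F

batch, heads, q_len, kv_len, head_dim = 2, 8, 16, 16, 64
query = torch.randn(batch, heads, q_len, head_dim)
key = torch.randn(batch, heads, kv_len, head_dim)
value = torch.randn(batch, heads, kv_len, head_dim)

# the fused attention kernel (the line-81 call in the trace)
attn_output = F.scaled_dot_product_attention(query, key, value)

# back to (batch, seq, heads, head_dim) with a contiguous layout before the
# output projection (the line-91 call in the trace)
attn_output = attn_output.transpose(1, 2).contiguous()
print(attn_output.shape)  # torch.Size([2, 16, 8, 64])
```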
Found from : 2025-08-14T21:49:12.1741629Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1741709Z return mod(**inputs) 2025-08-14T21:49:12.1742215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1742985Z outputs = self.model( 2025-08-14T21:49:12.1743310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1743395Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1743682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1743771Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1744019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1744114Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1744392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1744510Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1744792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1744897Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1745219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1745362Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1745368Z 2025-08-14T21:49:12.1745482Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1745709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1745783Z return mod(**inputs) 2025-08-14T21:49:12.1746089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1746173Z outputs = self.model( 2025-08-14T21:49:12.1746449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1746538Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1746923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1747045Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1747294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1747427Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1747740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1747870Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1748143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1748255Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1748570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1748688Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1748695Z 2025-08-14T21:49:12.1748791Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1748878Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1748995Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1749216Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1749287Z return mod(**inputs) 2025-08-14T21:49:12.1749574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1749648Z outputs = self.model( 2025-08-14T21:49:12.1749923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1750013Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1750285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1750372Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1750613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1750697Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1750981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:49:12.1751110Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1751113Z 2025-08-14T21:49:12.1751206Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751289Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751372Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751464Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751546Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751627Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751718Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751800Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1751911Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1752134Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1752207Z return mod(**inputs) 2025-08-14T21:49:12.1752490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1752565Z outputs = self.model( 2025-08-14T21:49:12.1752850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1752937Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1753236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1753331Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1753577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1753680Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1753975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1754084Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1754356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1754467Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1754781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1754930Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1754935Z 2025-08-14T21:49:12.1755049Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1755269Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1755352Z return mod(**inputs) 2025-08-14T21:49:12.1755643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1755716Z outputs = self.model( 2025-08-14T21:49:12.1756011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1756091Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1756377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1756455Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1756696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1756791Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1757068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1757185Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1757458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1757559Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1757880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1757995Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1757998Z 2025-08-14T21:49:12.1758094Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758176Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758259Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758346Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758429Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758509Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758597Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758677Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1758789Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1759013Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1759083Z return mod(**inputs) 2025-08-14T21:49:12.1759379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1759453Z outputs = self.model( 2025-08-14T21:49:12.1759782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1759890Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1760186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1760281Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1760546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1760634Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1760918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1761036Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1761313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1761426Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1761739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1761889Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1761893Z 2025-08-14T21:49:12.1762006Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1762222Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1762300Z return mod(**inputs) 2025-08-14T21:49:12.1762577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1762651Z outputs = self.model( 2025-08-14T21:49:12.1762961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1763042Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1763330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1763411Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1763658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1763751Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1764025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1764149Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1764450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1764554Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1764877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1764993Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1764999Z 2025-08-14T21:49:12.1765084Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1765176Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1765289Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1765514Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1765585Z return mod(**inputs) 2025-08-14T21:49:12.1765884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1765964Z outputs = self.model( 2025-08-14T21:49:12.1766256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1766340Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1766647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1766753Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1767020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1767106Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1767383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:49:12.1767518Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1767522Z 2025-08-14T21:49:12.1767609Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1767701Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1767783Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1767866Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1767959Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1768043Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1768123Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1768216Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1768327Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1768545Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1768625Z return mod(**inputs) 2025-08-14T21:49:12.1768903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1768986Z outputs = self.model( 2025-08-14T21:49:12.1769263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1769344Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1769628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1769708Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1769951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1770047Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1770324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1770438Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1770738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1770841Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1771169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1771312Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1771316Z 2025-08-14T21:49:12.1771434Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1771651Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1771727Z return mod(**inputs) 2025-08-14T21:49:12.1772016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1772090Z outputs = self.model( 2025-08-14T21:49:12.1772366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1772454Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1772750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1772839Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1773108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1773213Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1773506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1773617Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1773904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1774011Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1774329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1774462Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1774466Z 2025-08-14T21:49:12.1774558Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1774645Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1774739Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1774826Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1774920Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1775007Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1775093Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1775186Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1775302Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1775526Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1775611Z return mod(**inputs) 2025-08-14T21:49:12.1775894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1775981Z outputs = self.model( 2025-08-14T21:49:12.1776264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1776350Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1776639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1776723Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1776968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1777068Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1777351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1777480Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1777767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1777877Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1778205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1778353Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1778357Z 2025-08-14T21:49:12.1778483Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1778704Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1778781Z return mod(**inputs) 2025-08-14T21:49:12.1779068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1779145Z outputs = self.model( 2025-08-14T21:49:12.1779566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1779705Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1779980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1780092Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1780362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1780451Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1780730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1780847Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1781128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1781233Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1781548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1781674Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1781679Z 2025-08-14T21:49:12.1781766Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1781852Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1781976Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1782195Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1782273Z return mod(**inputs) 2025-08-14T21:49:12.1782546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1782618Z outputs = self.model( 2025-08-14T21:49:12.1782899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1782985Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1783261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1783348Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1783586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1783680Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1783950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:49:12.1784079Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1784083Z 2025-08-14T21:49:12.1784175Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784259Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784349Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784431Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784512Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784604Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784684Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784764Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1784885Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1785102Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1785173Z return mod(**inputs) 2025-08-14T21:49:12.1785456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1785528Z outputs = self.model( 2025-08-14T21:49:12.1785833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1785917Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1786225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1786332Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1786589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1786677Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1786961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1787069Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1787354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1787458Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1787771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1787921Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1787926Z 2025-08-14T21:49:12.1788036Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1788263Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1788338Z return mod(**inputs) 2025-08-14T21:49:12.1788615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1788695Z outputs = self.model( 2025-08-14T21:49:12.1788969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1789049Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1789331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1789410Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1789655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1789742Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1790017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-08-14T21:49:12.1790133Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:12.1790408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1790515Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1790831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1790945Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1790951Z 2025-08-14T21:49:12.1791045Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791133Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791217Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791307Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791391Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791481Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791562Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791643Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1791763Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1791980Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1792052Z return mod(**inputs) 2025-08-14T21:49:12.1792360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1792460Z outputs = self.model( 2025-08-14T21:49:12.1792752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1792851Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1793187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1793279Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1793528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1793614Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1793903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1794022Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1794317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1794424Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1794739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:12.1794888Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:12.1794892Z 2025-08-14T21:49:12.1795005Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1795228Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1795300Z return mod(**inputs) 2025-08-14T21:49:12.1795578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1795663Z outputs = self.model( 2025-08-14T21:49:12.1795942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1796025Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1796309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1796390Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1796652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1796737Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1797019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-08-14T21:49:12.1797142Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:49:12.1797420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-08-14T21:49:12.1797531Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:12.1797841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:12.1797957Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:12.1797964Z 2025-08-14T21:49:12.1798057Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1798142Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1798254Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:12.1798479Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1798549Z return mod(**inputs) 2025-08-14T21:49:12.1798828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-08-14T21:49:12.1798940Z outputs = self.model( 2025-08-14T21:49:12.1799230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-08-14T21:49:12.1799344Z decoder_outputs = self.decoder( 2025-08-14T21:49:12.1799634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-08-14T21:49:12.1799729Z layer_outputs = decoder_layer( 2025-08-14T21:49:12.1799993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:12.1800079Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:12.1800381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 504, in forward 2025-08-14T21:49:12.1800511Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:12.1800516Z 2025-08-14T21:49:12.1800603Z cudagraph partition due to non gpu ops 2025-08-14T21:49:12.1800723Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:12.1800943Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1801025Z return mod(**inputs) 2025-08-14T21:49:12.1801313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1422, in forward 2025-08-14T21:49:12.1801402Z lm_logits = self.lm_head(outputs[0]) 2025-08-14T21:49:12.1801406Z 2025-08-14T21:49:12.1801526Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:12.1801742Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:12.1801813Z return mod(**inputs) 2025-08-14T21:49:12.1802109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1429, in forward 2025-08-14T21:49:12.1802300Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-08-14T21:49:12.1802306Z 2025-08-14T21:49:25.2951665Z Compilation time (from dynamo_timed): 32.744513747 2025-08-14T21:49:25.3039427Z pass 2025-08-14T21:49:25.3039939Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:49:25.3040858Z TIMING: _recursive_pre_grad_passes:0.09474 _recursive_joint_graph_passes:1.1761 _recursive_post_grad_passes:0.15577 async_compile.wait:0.81891 code_gen:12.38982 inductor_compile:16.31707 backend_compile:26.98845 gc:0.00046 entire_frame_compile:32.74451 total_wall_time:32.74451 2025-08-14T21:49:25.3042035Z STATS: call_* op count: 1014 | FakeTensorMode.__torch_dispatch__:62443 | FakeTensor.__torch_dispatch__:9034 | ProxyTorchDispatchMode.__torch_dispatch__:13993 2025-08-14T21:49:25.3042651Z Dynamo produced 1 graphs covering 1014 ops with 0 graph breaks (0 unique) 2025-08-14T21:49:31.5659097Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-08-14T21:49:31.5660216Z from pkg_resources import resource_filename 2025-08-14T21:49:32.1733268Z 2025-08-14T21:49:35.0448784Z loading model: 0it [00:00, ?it/s] 2025-08-14T21:49:35.0449101Z loading model: 0it [00:02, ?it/s] 2025-08-14T21:49:35.0476201Z cpu eval MBartForCausalLM 2025-08-14T21:49:36.7635613Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:49:37.3857574Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:49:38.0904436Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:49:47.7284900Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7285941Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7286631Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7287009Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7287452Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7287775Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7288096Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7289114Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7289582Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7289858Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7290210Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7290964Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7291419Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7291732Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7292008Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7292324Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7292683Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7293380Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7294323Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7294838Z cudagraph partition due to non gpu ops. 
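The "Compilation time (from dynamo_timed)" and TIMING entries a few lines above split roughly 32.7 s of wall time across the Dynamo, AOT, and Inductor phases. A coarse version of that split can be measured outside the harness as sketched below; toy_model and the shapes are hypothetical, and compile_times() is assumed to behave as the helper used by the dynamo benchmark utilities (treat its exact signature as an assumption).

    # Sketch: first call pays Dynamo + Inductor compilation, later calls run the
    # cached artifact, which is the split the TIMING line above summarizes.
    import time
    import torch

    toy_model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU())
    compiled = torch.compile(toy_model)
    x = torch.randn(8, 64)

    t0 = time.perf_counter()
    compiled(x)                      # compile + run
    t1 = time.perf_counter()
    compiled(x)                      # run only
    t2 = time.perf_counter()
    print(f"first call {t1 - t0:.3f}s, second call {t2 - t1:.3f}s")

    # Finer per-phase numbers come from dynamo's own timers; the call below
    # prints a similar report (signature assumed).
    from torch._dynamo.utils import compile_times
    print(compile_times())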
Found from : 2025-08-14T21:49:47.7295310Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7295835Z return mod(**inputs) 2025-08-14T21:49:47.7296326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7296887Z outputs = self.model.decoder( 2025-08-14T21:49:47.7297417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7297981Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7298846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7299317Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7299999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7305135Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7305693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7306189Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7306681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7307213Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7307435Z 2025-08-14T21:49:47.7307554Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7308083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7308464Z return mod(**inputs) 2025-08-14T21:49:47.7308872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7309382Z outputs = self.model.decoder( 2025-08-14T21:49:47.7309856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7310279Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7310661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7311055Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7311478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7312053Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7312492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7312987Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7313496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7314009Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7314198Z 2025-08-14T21:49:47.7314290Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7314519Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7314774Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:47.7315174Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7315530Z return mod(**inputs) 2025-08-14T21:49:47.7315933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7316361Z outputs = self.model.decoder( 2025-08-14T21:49:47.7316789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7317213Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7317599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7317989Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7318406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7318873Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7319286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7319656Z return self.act(input) 2025-08-14T21:49:47.7319787Z 2025-08-14T21:49:47.7319874Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7320106Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7320323Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7320540Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7320768Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7320978Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7321197Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7321420Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7321668Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7322057Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7322406Z return mod(**inputs) 2025-08-14T21:49:47.7322800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7323212Z outputs = self.model.decoder( 2025-08-14T21:49:47.7323611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7324011Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7324357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7324724Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7325122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7325538Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7325966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7326409Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7326917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7327450Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7327645Z 2025-08-14T21:49:47.7327755Z cudagraph partition due to non gpu ops. 
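The "Trying to call the empty_gpu_cache for device: cpu" warnings above come from the harness skipping accelerator cache clearing on a CPU-only run. The sketch below shows the kind of guard that behavior implies; the function name and warning text are illustrative rather than the harness's actual code, and the xpu branch assumes torch.xpu mirrors the torch.cuda cache API.

    # Sketch: clear an accelerator allocator cache only when one exists;
    # on CPU, warn and do nothing, which matches the log lines above.
    import warnings
    import torch

    def maybe_empty_accelerator_cache(device: str) -> None:
        if device == "cuda" and torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()
        else:
            warnings.warn(f"empty_gpu_cache skipped for device: {device}")

    maybe_empty_accelerator_cache("cpu")   # warns and skips, as in this CPU job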
Found from : 2025-08-14T21:49:47.7328166Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7328535Z return mod(**inputs) 2025-08-14T21:49:47.7328918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7329307Z outputs = self.model.decoder( 2025-08-14T21:49:47.7329692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7330087Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7330455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7330842Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7331262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7331705Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7332133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7332573Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7333043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7333521Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7333702Z 2025-08-14T21:49:47.7333787Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7334013Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7334268Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7334646Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7334997Z return mod(**inputs) 2025-08-14T21:49:47.7335391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7335802Z outputs = self.model.decoder( 2025-08-14T21:49:47.7336213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7336628Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7336998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7337377Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7337796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7338254Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7338670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7339032Z return self.act(input) 2025-08-14T21:49:47.7339158Z 2025-08-14T21:49:47.7339247Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7339631Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7339876Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7340108Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7340337Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7340555Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7340788Z cudagraph partition due to non gpu ops 
2025-08-14T21:49:47.7341017Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7341305Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7341732Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7342485Z return mod(**inputs) 2025-08-14T21:49:47.7343020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7343492Z outputs = self.model.decoder( 2025-08-14T21:49:47.7344010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7344451Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7344836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7345238Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7345677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7346159Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7346624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7347079Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7347579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7348115Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7348323Z 2025-08-14T21:49:47.7348448Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:47.7348889Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7349269Z return mod(**inputs) 2025-08-14T21:49:47.7349679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7350104Z outputs = self.model.decoder( 2025-08-14T21:49:47.7350558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7350999Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7351382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7351782Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7352220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7352690Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7353142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7353606Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7354092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7354596Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7354775Z 2025-08-14T21:49:47.7354975Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7355205Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7355460Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7355869Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7356221Z return mod(**inputs) 2025-08-14T21:49:47.7356618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7357043Z outputs = self.model.decoder( 2025-08-14T21:49:47.7357441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7357855Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7358299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7358732Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7359143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7359634Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7360072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7360452Z return self.act(input) 2025-08-14T21:49:47.7360584Z 2025-08-14T21:49:47.7360669Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7360896Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7361123Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7361340Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7361562Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7361787Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7362000Z cudagraph partition due to non gpu ops 
2025-08-14T21:49:47.7362222Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7362472Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7362861Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7363217Z return mod(**inputs) 2025-08-14T21:49:47.7363615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7364034Z outputs = self.model.decoder( 2025-08-14T21:49:47.7364433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7364851Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7365228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7365628Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7366046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7366498Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7366939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7367377Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7367855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7368373Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7368568Z 2025-08-14T21:49:47.7368686Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:47.7369070Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7369420Z return mod(**inputs) 2025-08-14T21:49:47.7369820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7370233Z outputs = self.model.decoder( 2025-08-14T21:49:47.7370647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7371061Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7371436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7371832Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7372250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7372699Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7373201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7373665Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7374140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7374666Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7374866Z 2025-08-14T21:49:47.7374956Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7375189Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7375444Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7375849Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7376208Z return mod(**inputs) 2025-08-14T21:49:47.7376606Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7377041Z outputs = self.model.decoder( 2025-08-14T21:49:47.7377449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7377883Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7378261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7378657Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7379065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7379629Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7380065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7380427Z return self.act(input) 2025-08-14T21:49:47.7380560Z 2025-08-14T21:49:47.7380650Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7380880Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7381109Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7381324Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7381550Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7381777Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7381989Z cudagraph partition due to non gpu ops 
2025-08-14T21:49:47.7382215Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7382465Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7382845Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7383198Z return mod(**inputs) 2025-08-14T21:49:47.7383593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7384012Z outputs = self.model.decoder( 2025-08-14T21:49:47.7384419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7384836Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7385210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7385604Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7386013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7386456Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7386906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7387341Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7387867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7388393Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7388610Z 2025-08-14T21:49:47.7388732Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:47.7389141Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7389494Z return mod(**inputs) 2025-08-14T21:49:47.7389914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7390350Z outputs = self.model.decoder( 2025-08-14T21:49:47.7390762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7391188Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7391571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7391968Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7392408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7392880Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7393330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7393800Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7394338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7394849Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7395026Z 2025-08-14T21:49:47.7395120Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7395343Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7395599Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7396005Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7396365Z return mod(**inputs) 2025-08-14T21:49:47.7396766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7397209Z outputs = self.model.decoder( 2025-08-14T21:49:47.7397635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7398074Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7398460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7398851Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7399249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7399704Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7400112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7400469Z return self.act(input) 2025-08-14T21:49:47.7400585Z 2025-08-14T21:49:47.7400664Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7400885Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7401103Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7401307Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7401517Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7401730Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7401935Z cudagraph partition due to non gpu ops 
2025-08-14T21:49:47.7402148Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7402387Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7402816Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7403145Z return mod(**inputs) 2025-08-14T21:49:47.7403511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7403929Z outputs = self.model.decoder( 2025-08-14T21:49:47.7404365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7404787Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7405148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7405516Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7405900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7406323Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7406742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7407150Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7407600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7408087Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7408273Z 2025-08-14T21:49:47.7408385Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:47.7408743Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7409072Z return mod(**inputs) 2025-08-14T21:49:47.7409442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7409839Z outputs = self.model.decoder( 2025-08-14T21:49:47.7410215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7410609Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7410960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7411324Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7411727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7412141Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7412551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7412957Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7413431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7413901Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7414066Z 2025-08-14T21:49:47.7414154Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7414362Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7414600Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7414961Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7415285Z return mod(**inputs) 2025-08-14T21:49:47.7415654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7416051Z outputs = self.model.decoder( 2025-08-14T21:49:47.7416438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7416821Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7417207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7417576Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7417985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7418445Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7419776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7420173Z return self.act(input) 2025-08-14T21:49:47.7420294Z 2025-08-14T21:49:47.7420383Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7420617Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7420845Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7421063Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7421292Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7421518Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7421737Z cudagraph partition due to non gpu ops 
2025-08-14T21:49:47.7421959Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7422216Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7422608Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7422953Z return mod(**inputs) 2025-08-14T21:49:47.7423351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7423775Z outputs = self.model.decoder( 2025-08-14T21:49:47.7424184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7424608Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7424989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7425388Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7425805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7426259Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7426704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7427153Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7427627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7428150Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7428348Z 2025-08-14T21:49:47.7428471Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:47.7428859Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7429218Z return mod(**inputs) 2025-08-14T21:49:47.7429616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7430041Z outputs = self.model.decoder( 2025-08-14T21:49:47.7430449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7430873Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7431306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7431685Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7432102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7432536Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7433008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7433458Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7433956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7434476Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7434670Z 2025-08-14T21:49:47.7434793Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7435013Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7435267Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7435660Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7436006Z return mod(**inputs) 2025-08-14T21:49:47.7436395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7436812Z outputs = self.model.decoder( 2025-08-14T21:49:47.7437199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7437584Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7437935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7438298Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7438681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7439114Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7439509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7439853Z return self.act(input) 2025-08-14T21:49:47.7439962Z 2025-08-14T21:49:47.7440041Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7440254Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7440463Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7440664Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7440869Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7441076Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7441285Z cudagraph partition due to non gpu ops 
2025-08-14T21:49:47.7441483Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7441717Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7442313Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7442641Z return mod(**inputs) 2025-08-14T21:49:47.7443018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7443435Z outputs = self.model.decoder( 2025-08-14T21:49:47.7443837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7444262Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7444636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7445029Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7445440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7445884Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7446332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7446749Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7447184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:49:47.7447746Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:49:47.7447933Z 2025-08-14T21:49:47.7448087Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:49:47.7448442Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7448815Z return mod(**inputs) 2025-08-14T21:49:47.7449237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7449655Z outputs = self.model.decoder( 2025-08-14T21:49:47.7450052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7450465Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7450836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7451219Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7451640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:49:47.7452058Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:49:47.7452469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:49:47.7452886Z attn_output, attn_weights = attention_interface( 2025-08-14T21:49:47.7453355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:49:47.7453843Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:49:47.7454013Z 2025-08-14T21:49:47.7454106Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7454325Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7454575Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:49:47.7454961Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:49:47.7455319Z return mod(**inputs) 2025-08-14T21:49:47.7455706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-08-14T21:49:47.7456135Z outputs = self.model.decoder( 2025-08-14T21:49:47.7456537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:49:47.7456953Z layer_outputs = decoder_layer( 2025-08-14T21:49:47.7457321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:49:47.7457710Z return super().__call__(*args, **kwargs) 2025-08-14T21:49:47.7458112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:49:47.7458579Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:49:47.7458998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:49:47.7459366Z return self.act(input) 2025-08-14T21:49:47.7459561Z 2025-08-14T21:49:47.7459659Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7459898Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7460132Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7460355Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7460585Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7460816Z cudagraph partition due to non gpu ops 2025-08-14T21:49:47.7461053Z cudagraph partition due to non gpu ops 
2025-08-14T21:49:47.7534844Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:49:47.7535209Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:47.7535541Z return mod(**inputs)
2025-08-14T21:49:47.7535908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1880, in forward
2025-08-14T21:49:47.7536312Z logits = self.lm_head(outputs[0])
2025-08-14T21:49:47.7536451Z
2025-08-14T21:49:47.7536557Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:49:47.7536923Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:49:47.7537245Z return mod(**inputs)
2025-08-14T21:49:47.7537612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1886, in forward
2025-08-14T21:49:47.7538081Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:49:47.7538279Z
2025-08-14T21:49:57.7801896Z Compilation time (from dynamo_timed): 17.777136481
2025-08-14T21:49:57.7984501Z pass
2025-08-14T21:49:57.7985188Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:49:57.7990198Z TIMING: _recursive_pre_grad_passes:0.03705 _recursive_joint_graph_passes:0.65405 _recursive_post_grad_passes:0.07721 async_compile.wait:0.7913 code_gen:9.41963 inductor_compile:11.17303 backend_compile:15.50594 gc:0.00032 entire_frame_compile:17.77714 total_wall_time:17.77714
2025-08-14T21:49:57.7991446Z STATS: call_* op count: 373 | FakeTensorMode.__torch_dispatch__:24996 | FakeTensor.__torch_dispatch__:4012 | ProxyTorchDispatchMode.__torch_dispatch__:5664
2025-08-14T21:49:57.7991959Z Dynamo produced 1 graphs covering 373 ops with 0 graph breaks (0 unique)
2025-08-14T21:50:03.5940820Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:50:03.5942104Z from pkg_resources import resource_filename
2025-08-14T21:50:04.1832751Z
2025-08-14T21:50:09.3713714Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:50:09.3718630Z loading model: 0it [00:05, ?it/s]
2025-08-14T21:50:09.3740280Z cpu eval MBartForConditionalGeneration
2025-08-14T21:50:12.7088980Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:50:13.9957906Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:50:15.3151773Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
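The repeated empty_gpu_cache warning above is benign on this CPU run: a cpu device has no allocator cache to release. A minimal sketch of the kind of device guard that avoids the call entirely (the helper name is hypothetical and not the benchmark harness's own code):

    import torch

    def maybe_empty_gpu_cache(device: torch.device) -> None:
        # Only CUDA (and XPU, where the build provides it) expose a cached
        # allocator to release; on cpu this is a no-op, which is exactly what
        # the warning above is pointing out.
        if device.type == "cuda":
            torch.cuda.empty_cache()
        elif device.type == "xpu" and hasattr(torch, "xpu"):
            torch.xpu.empty_cache()

    maybe_empty_gpu_cache(torch.device("cpu"))  # does nothing, logs nothing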
2025-08-14T21:50:36.6955766Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:36.6956311Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.6956682Z return mod(**inputs)
2025-08-14T21:50:36.6957151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1436, in forward
2025-08-14T21:50:36.6957681Z decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
2025-08-14T21:50:36.6958581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 76, in shift_tokens_right
2025-08-14T21:50:36.6959228Z index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
2025-08-14T21:50:36.6959532Z
2025-08-14T21:50:36.6959627Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6959867Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6960140Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6960359Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6960586Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6960817Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6961071Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6961300Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6961544Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6961783Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6962015Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6962249Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6962467Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6962685Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6962910Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6963144Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6963379Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6963601Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6963826Z cudagraph partition due to non gpu ops
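The shift_tokens_right frame above is pure index arithmetic on the labels tensor (the position of the last non-pad token per row), which is the kind of host-side work the partitioner reports. A toy reproduction of just that line, with a made-up pad id and batch:

    import torch

    pad_token_id = 1  # hypothetical pad id, not taken from this run
    prev_output_tokens = torch.tensor([[5, 7, 9, 1, 1],
                                       [4, 2, 1, 1, 1]])

    # Same expression as modeling_mbart.py:76 - last non-pad position per row.
    index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    # tensor([[2],
    #         [1]])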
2025-08-14T21:50:36.6964077Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:36.6964474Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.6964839Z return mod(**inputs)
2025-08-14T21:50:36.6965242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.6975925Z outputs = self.model(
2025-08-14T21:50:36.6976469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward
2025-08-14T21:50:36.6976954Z encoder_outputs = self.encoder(
2025-08-14T21:50:36.6977406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward
2025-08-14T21:50:36.6977848Z layer_outputs = encoder_layer(
2025-08-14T21:50:36.6978250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.6978665Z return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.6979097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward
2025-08-14T21:50:36.6979838Z hidden_states, attn_weights = self.self_attn(
2025-08-14T21:50:36.6980293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:50:36.6980785Z attn_output, attn_weights = attention_interface(
2025-08-14T21:50:36.6981279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:36.6981824Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:36.6982032Z
2025-08-14T21:50:36.6982165Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:36.6982581Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.6982944Z return mod(**inputs)
2025-08-14T21:50:36.6983373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.6983823Z outputs = self.model(
2025-08-14T21:50:36.6984221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward
2025-08-14T21:50:36.6984779Z encoder_outputs = self.encoder(
2025-08-14T21:50:36.6985209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward
2025-08-14T21:50:36.6985687Z layer_outputs = encoder_layer(
2025-08-14T21:50:36.6986118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.6986615Z return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.6987061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward
2025-08-14T21:50:36.6987507Z hidden_states, attn_weights = self.self_attn(
2025-08-14T21:50:36.6987962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:50:36.6988417Z attn_output, attn_weights = attention_interface(
2025-08-14T21:50:36.6988911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:36.6989414Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:36.6989602Z
2025-08-14T21:50:36.6989697Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6989941Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6990209Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:50:36.6990609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.6990973Z return mod(**inputs)
2025-08-14T21:50:36.6991394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.6991823Z outputs = self.model(
2025-08-14T21:50:36.6992254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward
2025-08-14T21:50:36.6992697Z encoder_outputs = self.encoder(
2025-08-14T21:50:36.6993121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward
2025-08-14T21:50:36.6993543Z layer_outputs = encoder_layer(
2025-08-14T21:50:36.6993936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.6994346Z return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.6994790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward
2025-08-14T21:50:36.6995276Z hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:50:36.6995709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:50:36.6996093Z return self.act(input)
2025-08-14T21:50:36.6996357Z
2025-08-14T21:50:36.6996444Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6996679Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6996906Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6997123Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6997347Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6997578Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6997940Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6998171Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.6998438Z cudagraph partition due to non gpu ops.
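As background for the diagnostic itself: CUDA graphs can only record and replay device-side kernels, so any work that runs on the host (or, as in this CPU job, the whole model) has to stay outside a captured region, and that split is what the "cudagraph partition due to non gpu ops" lines appear to be reporting. A minimal sketch of a manual CUDA graph capture, guarded to be a no-op without a GPU (illustrative only, not the benchmark's own code path):

    import torch

    if torch.cuda.is_available():
        static_x = torch.randn(8, 8, device="cuda")
        static_y = torch.empty_like(static_x)

        g = torch.cuda.CUDAGraph()
        torch.cuda.synchronize()  # finish prior work before capture

        with torch.cuda.graph(g):
            # Only GPU kernels can be recorded here; a host-side op such as
            # static_x.cpu() or static_x.item() at this point could not be
            # captured and would force the graph to be partitioned around it.
            static_y.copy_(static_x * 2 + 1)

        static_x.fill_(3.0)
        g.replay()                # re-runs the recorded kernels on the static buffers
        torch.cuda.synchronize()
        print(static_y[0, 0].item())  # 7.0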
Found from : 2025-08-14T21:50:36.6998843Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.6999195Z return mod(**inputs) 2025-08-14T21:50:36.6999602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7000010Z outputs = self.model( 2025-08-14T21:50:36.7000430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7000850Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7001292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7001756Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7002154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7002553Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7002995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7003451Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7003896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7004372Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7004871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7005414Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7005620Z 2025-08-14T21:50:36.7005736Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7006145Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7006514Z return mod(**inputs) 2025-08-14T21:50:36.7006931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7007374Z outputs = self.model( 2025-08-14T21:50:36.7007799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7008249Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7008677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7009121Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7009511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7009914Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7010353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7010805Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7011255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7011706Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7012204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7012719Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7012900Z 2025-08-14T21:50:36.7013002Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7013231Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7013498Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7013908Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7014272Z return mod(**inputs) 2025-08-14T21:50:36.7014683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7015115Z outputs = self.model( 2025-08-14T21:50:36.7015523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7015957Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7016416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7016859Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7017233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7017657Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7018110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:50:36.7018582Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7019000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7019372Z return self.act(input) 2025-08-14T21:50:36.7019576Z 2025-08-14T21:50:36.7019674Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7019900Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7020133Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7020361Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7020588Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7020806Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7021047Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7021274Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7021531Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7021934Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7022296Z return mod(**inputs) 2025-08-14T21:50:36.7022689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7023110Z outputs = self.model( 2025-08-14T21:50:36.7023510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7023938Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7024346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7024776Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7025162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7025570Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7025988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7026430Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7026870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7027314Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7027801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7028330Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7028533Z 2025-08-14T21:50:36.7028657Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7029044Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7029407Z return mod(**inputs) 2025-08-14T21:50:36.7029808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7030242Z outputs = self.model( 2025-08-14T21:50:36.7030635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7031071Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7031521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7031940Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7032359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7032812Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7033260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7033703Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7034151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7034608Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7035103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7035602Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7035792Z 2025-08-14T21:50:36.7035888Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7036130Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7036388Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7036797Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7037164Z return mod(**inputs) 2025-08-14T21:50:36.7037584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7038012Z outputs = self.model( 2025-08-14T21:50:36.7038432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7038870Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7039290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7039730Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7040128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7040546Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7040975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:50:36.7041454Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7042246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7042629Z return self.act(input) 2025-08-14T21:50:36.7042753Z 2025-08-14T21:50:36.7042845Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7043078Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7043311Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7043533Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7043761Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7043990Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7044211Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7044447Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7044708Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7045108Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7045460Z return mod(**inputs) 2025-08-14T21:50:36.7045860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7046299Z outputs = self.model( 2025-08-14T21:50:36.7046689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7047127Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7047666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7048139Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7048514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7048989Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7049459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7049898Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7050343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7050796Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7051283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7051800Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7052009Z 2025-08-14T21:50:36.7052125Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7052536Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7052899Z return mod(**inputs) 2025-08-14T21:50:36.7053317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7053739Z outputs = self.model( 2025-08-14T21:50:36.7054135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7054560Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7054978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7055399Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7055778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7056167Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7056598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7057039Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7057470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7057918Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7058398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7058900Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7059076Z 2025-08-14T21:50:36.7059171Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7059402Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7059750Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7060153Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7060511Z return mod(**inputs) 2025-08-14T21:50:36.7060920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7061339Z outputs = self.model( 2025-08-14T21:50:36.7061729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7062155Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7062578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7063035Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7063417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7063840Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7064292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:50:36.7064781Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7065212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7065583Z return self.act(input) 2025-08-14T21:50:36.7065702Z 2025-08-14T21:50:36.7065797Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7066019Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7066249Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7066473Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7066691Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7066917Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7067143Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7067357Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7067618Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7068015Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7068373Z return mod(**inputs) 2025-08-14T21:50:36.7068785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7069210Z outputs = self.model( 2025-08-14T21:50:36.7069609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7070039Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7070467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7070880Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7071251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7071627Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7072048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7072478Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7072904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7073329Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7073799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7074310Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7074501Z 2025-08-14T21:50:36.7074613Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7074990Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7075337Z return mod(**inputs) 2025-08-14T21:50:36.7075726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7076134Z outputs = self.model( 2025-08-14T21:50:36.7076534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7076956Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7077351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7077762Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7079105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7079546Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7079959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7080451Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7080910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7081367Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7081846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7082336Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7082509Z 2025-08-14T21:50:36.7082604Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7082825Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7083076Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7083466Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7083815Z return mod(**inputs) 2025-08-14T21:50:36.7084198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7084604Z outputs = self.model( 2025-08-14T21:50:36.7084988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7085403Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7085807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7086215Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7086585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7086963Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7087377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 332, in forward 2025-08-14T21:50:36.7087836Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7088249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7088601Z return self.act(input) 2025-08-14T21:50:36.7088724Z 2025-08-14T21:50:36.7088811Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7089035Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7089249Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7089469Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7089689Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7089899Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7090118Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7090335Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7090582Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7090959Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7091304Z return mod(**inputs) 2025-08-14T21:50:36.7091701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7092102Z outputs = self.model( 2025-08-14T21:50:36.7092487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-08-14T21:50:36.7092907Z encoder_outputs = self.encoder( 2025-08-14T21:50:36.7093316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-08-14T21:50:36.7093749Z layer_outputs = encoder_layer( 2025-08-14T21:50:36.7094137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7094561Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7095002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-08-14T21:50:36.7095466Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:50:36.7095908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7096361Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7096836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7097360Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7097560Z 2025-08-14T21:50:36.7097682Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:50:36.7098077Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.7098430Z     return mod(**inputs)
2025-08-14T21:50:36.7098819Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.7099237Z     outputs = self.model(
2025-08-14T21:50:36.7099730Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward
2025-08-14T21:50:36.7100161Z     encoder_outputs = self.encoder(
2025-08-14T21:50:36.7100692Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward
2025-08-14T21:50:36.7101132Z     layer_outputs = encoder_layer(
2025-08-14T21:50:36.7101525Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.7101921Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.7102355Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward
2025-08-14T21:50:36.7102796Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:50:36.7103238Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:50:36.7103680Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:36.7104166Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:36.7104668Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:36.7104843Z 
2025-08-14T21:50:36.7104938Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7105160Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7105417Z cudagraph partition due to non gpu ops.
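The encoder-side "Found from :" entries (the three traces above, which then repeat for the remaining encoder layers) all point at the same call sites in transformers' MBart encoder layer: the torch.nn.functional.scaled_dot_product_attention call and the transpose(1, 2).contiguous() in sdpa_attention_forward, and the fc1 + activation in the layer's feed-forward block. The sketch below is illustrative only (hypothetical module, shapes, and device handling; it is not the benchmark harness from this log) and reconstructs that call sequence so the trace frames are easier to map to code; it uses torch.compile(mode="reduce-overhead"), the mode that enables Inductor's cudagraph handling.

# Illustrative sketch -- hypothetical layer and shapes, not the benchmark code from this job.
# It mirrors the call sequence the encoder-side traces point at:
#   scaled_dot_product_attention -> transpose(1, 2).contiguous() -> fc1 + activation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoderLayer(nn.Module):
    def __init__(self, embed_dim=64, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.fc1 = nn.Linear(embed_dim, 4 * embed_dim)
        self.activation_fn = nn.GELU()

    def forward(self, hidden_states):
        b, s, _ = hidden_states.shape
        q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
        # (batch, seq, embed) -> (batch, heads, seq, head_dim), as the HF SDPA path does
        q, k, v = (t.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v)            # sdpa_attention.py:81 in the trace
        attn = attn.transpose(1, 2).contiguous().view(b, s, -1)   # sdpa_attention.py:91 in the trace
        return self.activation_fn(self.fc1(attn))                 # modeling_mbart.py:332 in the trace

if torch.cuda.is_available():
    layer = TinyEncoderLayer().cuda()
    compiled = torch.compile(layer, mode="reduce-overhead")       # reduce-overhead enables cudagraphs
    out = compiled(torch.randn(2, 16, 64, device="cuda"))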
Found from :
2025-08-14T21:50:36.7244071Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.7244406Z     return mod(**inputs)
2025-08-14T21:50:36.7244820Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.7245246Z     outputs = self.model(
2025-08-14T21:50:36.7245638Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:50:36.7246037Z     decoder_outputs = self.decoder(
2025-08-14T21:50:36.7246426Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:50:36.7246818Z     layer_outputs = decoder_layer(
2025-08-14T21:50:36.7247165Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.7247531Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.7247925Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward
2025-08-14T21:50:36.7248336Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:50:36.7248776Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:50:36.7249214Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:36.7249690Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:36.7250208Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:36.7250415Z 
2025-08-14T21:50:36.7250525Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:36.7250914Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.7251266Z     return mod(**inputs)
2025-08-14T21:50:36.7251646Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.7252059Z     outputs = self.model(
2025-08-14T21:50:36.7252450Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:50:36.7252859Z     decoder_outputs = self.decoder(
2025-08-14T21:50:36.7253272Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:50:36.7253690Z     layer_outputs = decoder_layer(
2025-08-14T21:50:36.7254059Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.7254435Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.7254849Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward
2025-08-14T21:50:36.7255295Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:50:36.7255733Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:50:36.7256195Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:36.7256691Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:36.7257197Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:36.7257366Z 
2025-08-14T21:50:36.7257450Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7257694Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7257915Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7258134Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7258346Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7258835Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7259055Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7259271Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7259605Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:36.7260020Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.7260367Z     return mod(**inputs)
2025-08-14T21:50:36.7260766Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.7261193Z     outputs = self.model(
2025-08-14T21:50:36.7261592Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:50:36.7262007Z     decoder_outputs = self.decoder(
2025-08-14T21:50:36.7262425Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:50:36.7262853Z     layer_outputs = decoder_layer(
2025-08-14T21:50:36.7263231Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.7263619Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.7264033Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward
2025-08-14T21:50:36.7264477Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:50:36.7264916Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:50:36.7265351Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:36.7265828Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:50:36.7266339Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:50:36.7266532Z 
2025-08-14T21:50:36.7266643Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:36.7267025Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.7267371Z     return mod(**inputs)
2025-08-14T21:50:36.7267751Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.7268169Z     outputs = self.model(
2025-08-14T21:50:36.7268565Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:50:36.7268983Z     decoder_outputs = self.decoder(
2025-08-14T21:50:36.7269390Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:50:36.7269798Z     layer_outputs = decoder_layer(
2025-08-14T21:50:36.7270174Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.7270566Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.7271010Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward
2025-08-14T21:50:36.7271461Z     hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:50:36.7271931Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward
2025-08-14T21:50:36.7272369Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:50:36.7272830Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:50:36.7273287Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:50:36.7273448Z 
2025-08-14T21:50:36.7273537Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7273741Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7273976Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:50:36.7274337Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.7274657Z     return mod(**inputs)
2025-08-14T21:50:36.7275022Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward
2025-08-14T21:50:36.7275405Z     outputs = self.model(
2025-08-14T21:50:36.7275769Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward
2025-08-14T21:50:36.7276149Z     decoder_outputs = self.decoder(
2025-08-14T21:50:36.7276531Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward
2025-08-14T21:50:36.7276918Z     layer_outputs = decoder_layer(
2025-08-14T21:50:36.7277266Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:50:36.7277621Z     return super().__call__(*args, **kwargs)
2025-08-14T21:50:36.7278007Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward
2025-08-14T21:50:36.7278439Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:50:36.7278820Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:50:36.7279162Z     return self.act(input)
2025-08-14T21:50:36.7279275Z 
2025-08-14T21:50:36.7279355Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7279567Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7279771Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7279979Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7280185Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7280383Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7280590Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7280795Z cudagraph partition due to non gpu ops
2025-08-14T21:50:36.7281029Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:50:36.7324885Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7325207Z return mod(**inputs) 2025-08-14T21:50:36.7325560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7325943Z outputs = self.model( 2025-08-14T21:50:36.7326304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7326687Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7327059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7327444Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7327793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7328148Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7328560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7329005Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7329440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7329865Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7330310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7330766Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7330926Z 2025-08-14T21:50:36.7331014Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7331221Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7331430Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7331643Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7331848Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7332061Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7332270Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7332473Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7332711Z cudagraph partition due to non gpu ops. 
[2025-08-14T21:50:36.7333081Z through 2025-08-14T21:50:36.7431416Z: the same five "cudagraph partition due to non gpu ops. Found from : ..." records as above (encoder_attn SDPA call, encoder_attn transpose/contiguous, fc1 activation, self_attn SDPA call, self_attn transpose/contiguous) repeat verbatim with later timestamps; duplicate records omitted.]
Found from : 2025-08-14T21:50:36.7431634Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7431720Z return mod(**inputs) 2025-08-14T21:50:36.7432014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7432095Z outputs = self.model( 2025-08-14T21:50:36.7432369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7432446Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7432731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7432807Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7433053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7433145Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7433393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:36.7433520Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7433735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7433807Z return self.act(input) 2025-08-14T21:50:36.7433811Z 2025-08-14T21:50:36.7433899Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7433979Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7434064Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7434143Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7434222Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7434309Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7434388Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7434469Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7434584Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7434793Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7434861Z return mod(**inputs) 2025-08-14T21:50:36.7435138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7435211Z outputs = self.model( 2025-08-14T21:50:36.7435497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7435577Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7435858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7435951Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7436186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7436280Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7436554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7436662Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7436939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7437054Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7437352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7437495Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7437550Z 2025-08-14T21:50:36.7437662Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7437907Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7437995Z return mod(**inputs) 2025-08-14T21:50:36.7438266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7438364Z outputs = self.model( 2025-08-14T21:50:36.7438631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7438717Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7438988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7439067Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7439315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7439403Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7439684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7439801Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7440088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7440199Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7440503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7440617Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7440621Z 2025-08-14T21:50:36.7440716Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7440800Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7440890Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7440974Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7441057Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7441146Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7441230Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7441312Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7441440Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7441657Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7441727Z return mod(**inputs) 2025-08-14T21:50:36.7442168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7442249Z outputs = self.model( 2025-08-14T21:50:36.7442538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7442621Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7442898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7442990Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7443229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7443317Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7443603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7443723Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7444012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7444116Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7444524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7444701Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7444733Z 2025-08-14T21:50:36.7444846Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7445105Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7445181Z return mod(**inputs) 2025-08-14T21:50:36.7445460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7445543Z outputs = self.model( 2025-08-14T21:50:36.7445818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7445898Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7446183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7446264Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7446512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7446601Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7446876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7447004Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7447279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7447392Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7447707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7447819Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7447825Z 2025-08-14T21:50:36.7447919Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7448001Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7448113Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7448337Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7448407Z return mod(**inputs) 2025-08-14T21:50:36.7448686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7448759Z outputs = self.model( 2025-08-14T21:50:36.7449030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7449119Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7449393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7449471Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7449719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7449805Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7450087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:36.7450214Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7450441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7450524Z return self.act(input) 2025-08-14T21:50:36.7450527Z 2025-08-14T21:50:36.7450611Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7450700Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7450802Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7450886Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7450995Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7451077Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7451175Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7451263Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7451375Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7451609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7451691Z return mod(**inputs) 2025-08-14T21:50:36.7451974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7452055Z outputs = self.model( 2025-08-14T21:50:36.7452333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7452415Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7452699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7452780Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7453020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7453118Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7453390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7453506Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7453778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7453885Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7454211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7454353Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7454357Z 2025-08-14T21:50:36.7454477Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7454696Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7454772Z return mod(**inputs) 2025-08-14T21:50:36.7455057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7455129Z outputs = self.model( 2025-08-14T21:50:36.7455403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7455494Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7455768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7455856Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7456096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7456184Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7456469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7456578Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7456857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7456960Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7457272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7457420Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7457425Z 2025-08-14T21:50:36.7457511Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7457614Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7457705Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7457808Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7457898Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7458002Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7458086Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7458174Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7458286Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7458500Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7458581Z return mod(**inputs) 2025-08-14T21:50:36.7458855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7458939Z outputs = self.model( 2025-08-14T21:50:36.7459214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7459297Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7459765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7459852Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7460093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7460190Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7460467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7460591Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7460866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7460971Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7461293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7461432Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7461437Z 2025-08-14T21:50:36.7461559Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7461772Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7461846Z return mod(**inputs) 2025-08-14T21:50:36.7462126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7462200Z outputs = self.model( 2025-08-14T21:50:36.7462473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7462564Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7463005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7463105Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7463348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7463436Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7463724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7463844Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7464126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7464277Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7464592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7464740Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7464763Z 2025-08-14T21:50:36.7464852Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7464954Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7465080Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7465296Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7465378Z return mod(**inputs) 2025-08-14T21:50:36.7465649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7465721Z outputs = self.model( 2025-08-14T21:50:36.7466004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7466087Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7466364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7466453Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7466700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7466792Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7467062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:36.7467191Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7467427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7467502Z return self.act(input) 2025-08-14T21:50:36.7467508Z 2025-08-14T21:50:36.7467592Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468014Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468131Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468222Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468311Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468393Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468487Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468569Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7468685Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7468915Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7468987Z return mod(**inputs) 2025-08-14T21:50:36.7469275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7469354Z outputs = self.model( 2025-08-14T21:50:36.7469647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7469740Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7470032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7470112Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7470371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7470459Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7470743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7470853Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7471161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7471275Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7471604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7471747Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7471758Z 2025-08-14T21:50:36.7471872Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7472066Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7472140Z return mod(**inputs) 2025-08-14T21:50:36.7472387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7472454Z outputs = self.model( 2025-08-14T21:50:36.7472711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7472782Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7473037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7473110Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7473324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7473409Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7473654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7473750Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7474007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7474101Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7474386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7474492Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7474497Z 2025-08-14T21:50:36.7474574Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7474658Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7474736Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7474816Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7474890Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7474963Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7475043Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7475116Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7475215Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7475415Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7475481Z return mod(**inputs) 2025-08-14T21:50:36.7475724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7475799Z outputs = self.model( 2025-08-14T21:50:36.7476045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7476125Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7476371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7476441Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7476662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7476738Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7477000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7477116Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7477375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7477504Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7478194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7478325Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7478328Z 2025-08-14T21:50:36.7478438Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7478628Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7478700Z return mod(**inputs) 2025-08-14T21:50:36.7478948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7479013Z outputs = self.model( 2025-08-14T21:50:36.7479267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7479344Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7479596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7479680Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7479898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7479987Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7480244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7480351Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7480612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7480709Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7481002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7481109Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7481112Z 2025-08-14T21:50:36.7481194Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7481279Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7481380Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7481577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7481650Z return mod(**inputs) 2025-08-14T21:50:36.7481903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7481977Z outputs = self.model( 2025-08-14T21:50:36.7482236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7482313Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7482565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7482636Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7482848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7482934Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7483179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:36.7483303Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7483528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7483613Z return self.act(input) 2025-08-14T21:50:36.7483617Z 2025-08-14T21:50:36.7483702Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7483795Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7483876Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7483976Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7484052Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7484132Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7484206Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7484280Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7484388Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7484581Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7484645Z return mod(**inputs) 2025-08-14T21:50:36.7484906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7484974Z outputs = self.model( 2025-08-14T21:50:36.7485233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7485307Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7485553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7485630Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7485843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7485919Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7486175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7486273Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7486526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7486623Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7486908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7487040Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7487044Z 2025-08-14T21:50:36.7487142Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7487341Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7487404Z return mod(**inputs) 2025-08-14T21:50:36.7487651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7487726Z outputs = self.model( 2025-08-14T21:50:36.7487975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7488049Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7488304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7488376Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7488595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7488672Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7488914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-08-14T21:50:36.7489019Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:50:36.7489287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7489409Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7489699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7489820Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7489839Z 2025-08-14T21:50:36.7489932Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490011Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490088Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490183Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490257Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490339Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490414Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490487Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7490596Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7490789Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7490855Z return mod(**inputs) 2025-08-14T21:50:36.7491109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7491175Z outputs = self.model( 2025-08-14T21:50:36.7491433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7491506Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7491756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7491837Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7492057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7492138Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7492398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7492510Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7492766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7492863Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7493156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:50:36.7493290Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:50:36.7493293Z 2025-08-14T21:50:36.7493393Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7493600Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7493666Z return mod(**inputs) 2025-08-14T21:50:36.7493919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7493994Z outputs = self.model( 2025-08-14T21:50:36.7494254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7494327Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7494587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7494660Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7494890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7494971Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7495242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-08-14T21:50:36.7495375Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:50:36.7495623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-08-14T21:50:36.7495737Z attn_output, attn_weights = attention_interface( 2025-08-14T21:50:36.7496053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:50:36.7496159Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:50:36.7496163Z 2025-08-14T21:50:36.7496250Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7496328Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7496428Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:50:36.7496636Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7496701Z return mod(**inputs) 2025-08-14T21:50:36.7496964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-08-14T21:50:36.7497031Z outputs = self.model( 2025-08-14T21:50:36.7497282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-08-14T21:50:36.7497366Z decoder_outputs = self.decoder( 2025-08-14T21:50:36.7497619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-08-14T21:50:36.7497692Z layer_outputs = decoder_layer( 2025-08-14T21:50:36.7497921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:50:36.7498001Z return super().__call__(*args, **kwargs) 2025-08-14T21:50:36.7498259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 446, in forward 2025-08-14T21:50:36.7498378Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:50:36.7498589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:50:36.7498667Z return self.act(input) 2025-08-14T21:50:36.7498670Z 2025-08-14T21:50:36.7498750Z cudagraph partition due to non gpu ops 2025-08-14T21:50:36.7498853Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:50:36.7499058Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:50:36.7499123Z return mod(**inputs) 2025-08-14T21:50:36.7499384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1456, in forward 2025-08-14T21:50:36.7499596Z lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias 2025-08-14T21:50:36.7499603Z 2025-08-14T21:50:36.7499715Z cudagraph partition due to non gpu ops. 
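The MBart frames above all end in the same place: transformers' sdpa_attention_forward, reached from a compiled forward_pass that simply calls mod(**inputs). As a rough, hypothetical sketch of that call shape (the TinySdpaAttention module, its sizes, and projection names below are invented for illustration and are not the benchmark's MBart model or the huggingface.py harness), the partition points correspond to the scaled_dot_product_attention call and the transpose(1, 2).contiguous() that follows it:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySdpaAttention(nn.Module):
    # Hypothetical stand-in; only the final SDPA call path mirrors the traced frames.
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.heads = heads
        self.head_dim = dim // heads

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.heads, self.head_dim).transpose(1, 2)
        # The same two steps named in transformers/integrations/sdpa_attention.py above:
        attn_output = F.scaled_dot_product_attention(q, k, v)
        attn_output = attn_output.transpose(1, 2).contiguous()
        return attn_output.view(b, t, -1)

mod = TinySdpaAttention()
compiled = torch.compile(mod)           # the benchmark compiles the full HF model instead
out = compiled(torch.randn(2, 16, 64))  # the first call triggers the inductor compile

Compiling the forward is what makes these Python frames appear in the diagnostics: the "Found from" snippets report the original source locations captured while tracing, which is why they point back into the transformers sources rather than into generated code.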
Found from : 2025-08-14T21:50:36.7499932Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:50:36.7500000Z return mod(**inputs)
2025-08-14T21:50:36.7500263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1461, in forward
2025-08-14T21:50:36.7500434Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:50:36.7500438Z
2025-08-14T21:50:51.1522178Z Compilation time (from dynamo_timed): 33.184170255
2025-08-14T21:50:51.1657624Z pass
2025-08-14T21:50:51.1658364Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:50:51.1659903Z TIMING: _recursive_pre_grad_passes:0.09161 _recursive_joint_graph_passes:1.19456 _recursive_post_grad_passes:0.17938 async_compile.wait:0.81187 code_gen:12.40907 inductor_compile:16.38539 backend_compile:27.29297 gc:0.00035 entire_frame_compile:33.18417 total_wall_time:33.18417
2025-08-14T21:50:51.1661028Z STATS: call_* op count: 986 | FakeTensorMode.__torch_dispatch__:63787 | FakeTensor.__torch_dispatch__:9911 | ProxyTorchDispatchMode.__torch_dispatch__:14032
2025-08-14T21:50:51.1661660Z Dynamo produced 1 graphs covering 986 ops with 0 graph breaks (0 unique)
2025-08-14T21:50:57.4966093Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:50:57.4967141Z from pkg_resources import resource_filename
2025-08-14T21:50:58.2024803Z
2025-08-14T21:51:00.7900201Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:51:00.7900655Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:51:00.7917339Z cpu eval MT5ForConditionalGeneration
2025-08-14T21:51:01.4563720Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:51:01.7511026Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:51:02.0217983Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:51:18.7261072Z cudagraph partition due to non gpu ops.
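The Compilation time (from dynamo_timed), TIMING, STATS, and graph-count lines above are emitted by the benchmark harness. Roughly comparable numbers can be pulled from Dynamo's own utilities; the snippet below is a minimal sketch using a toy model (the Sequential model and tensor sizes are invented here, and compile_times/counters are torch._dynamo internals whose exact output format may vary between versions):

import torch
from torch._dynamo.utils import compile_times, counters

# Toy stand-in; the benchmark compiles a full HuggingFace model instead.
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.GELU())
compiled = torch.compile(model)
compiled(torch.randn(4, 8))  # the first call triggers the actual compilation

# Per-phase compile-time summary, analogous to the dynamo_timed / TIMING lines above.
print(compile_times())
# Graph-break reasons; expected to be empty when a run reports "0 graph breaks".
print(dict(counters["graph_break"]))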
Found from : 2025-08-14T21:51:18.7261810Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7262282Z return mod(**inputs) 2025-08-14T21:51:18.7262774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7263320Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7268065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7271619Z layer_outputs = layer_module( 2025-08-14T21:51:18.7276870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7281046Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7286058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7290161Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7292345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7292830Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7293265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 421, in forward 2025-08-14T21:51:18.7293695Z position_bias = position_bias + causal_mask 2025-08-14T21:51:18.7293869Z 2025-08-14T21:51:18.7294000Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7294406Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7294771Z return mod(**inputs) 2025-08-14T21:51:18.7295219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7295689Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7296096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7296514Z layer_outputs = layer_module( 2025-08-14T21:51:18.7296899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7297319Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7297746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7298509Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7299025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7299674Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7300110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7300596Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7300759Z 2025-08-14T21:51:18.7300879Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7301279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7301636Z return mod(**inputs) 2025-08-14T21:51:18.7302051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7302470Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7302878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7303316Z layer_outputs = layer_module( 2025-08-14T21:51:18.7303706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7304109Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7304510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7304941Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7305346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7305779Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7306182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7306592Z key_states = self.k(current_states) 2025-08-14T21:51:18.7306738Z 2025-08-14T21:51:18.7306860Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7307256Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7307599Z return mod(**inputs) 2025-08-14T21:51:18.7307981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7308399Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7308794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7309205Z layer_outputs = layer_module( 2025-08-14T21:51:18.7309578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7309980Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7310376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7310787Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7311196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7311596Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7312000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7312462Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7312661Z 2025-08-14T21:51:18.7312788Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7313170Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7313506Z return mod(**inputs) 2025-08-14T21:51:18.7313922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7314354Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7314736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7315163Z layer_outputs = layer_module( 2025-08-14T21:51:18.7315559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7315947Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7316348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7316756Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7317162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7317551Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7317937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7318346Z value_states = self.v(current_states) 2025-08-14T21:51:18.7318494Z 2025-08-14T21:51:18.7318613Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7318999Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7319354Z return mod(**inputs) 2025-08-14T21:51:18.7319733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7320128Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7320522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7320919Z layer_outputs = layer_module( 2025-08-14T21:51:18.7321287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7321670Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7322077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7322487Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7322881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7323285Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7323687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7324131Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7324307Z 2025-08-14T21:51:18.7324418Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7324806Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7325153Z return mod(**inputs) 2025-08-14T21:51:18.7325528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7325921Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7326322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7326716Z layer_outputs = layer_module( 2025-08-14T21:51:18.7327076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7327458Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7327860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7328260Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7328667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7329092Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7329491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:51:18.7330298Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:51:18.7330491Z 2025-08-14T21:51:18.7330605Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7330993Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7331346Z return mod(**inputs) 2025-08-14T21:51:18.7331714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7332115Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7332513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7333011Z layer_outputs = layer_module( 2025-08-14T21:51:18.7333431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7333822Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7334225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7334624Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7335033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7335444Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7335850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7336265Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7336430Z 2025-08-14T21:51:18.7336546Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7336946Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7337305Z return mod(**inputs) 2025-08-14T21:51:18.7337689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7338099Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7338497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7338891Z layer_outputs = layer_module( 2025-08-14T21:51:18.7339277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7339747Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7340172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7340585Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7340999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7341406Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7342017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7342434Z value_states = self.v(current_states) 2025-08-14T21:51:18.7342588Z 2025-08-14T21:51:18.7342701Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7343092Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7343433Z return mod(**inputs) 2025-08-14T21:51:18.7343870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7344279Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7344706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7345172Z layer_outputs = layer_module( 2025-08-14T21:51:18.7345556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7345980Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7346396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7346827Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7347241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7347663Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7348071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7348614Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7348809Z 2025-08-14T21:51:18.7348937Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7349342Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7349713Z return mod(**inputs) 2025-08-14T21:51:18.7350110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7350524Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7350934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7351360Z layer_outputs = layer_module( 2025-08-14T21:51:18.7351749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7352152Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7352572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7352991Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7353407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7353874Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7354300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7354722Z key_states = self.k(current_states) 2025-08-14T21:51:18.7354867Z 2025-08-14T21:51:18.7354993Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7355384Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7355748Z return mod(**inputs) 2025-08-14T21:51:18.7356135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7356541Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7356951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7357373Z layer_outputs = layer_module( 2025-08-14T21:51:18.7357760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7358159Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7358571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7358988Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7359394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7359848Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7360286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7360761Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7360952Z 2025-08-14T21:51:18.7361062Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7361464Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7361817Z return mod(**inputs) 2025-08-14T21:51:18.7362198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7362605Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7363022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7363450Z layer_outputs = layer_module( 2025-08-14T21:51:18.7363825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7364228Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7364648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7365073Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7365484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7365909Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7366338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7366800Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7366976Z 2025-08-14T21:51:18.7367087Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7367482Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7367846Z return mod(**inputs) 2025-08-14T21:51:18.7368228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7368645Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7369055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7369468Z layer_outputs = layer_module( 2025-08-14T21:51:18.7369844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7370237Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7370647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7371047Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7371457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7371878Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7372295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:51:18.7372741Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:51:18.7372928Z 2025-08-14T21:51:18.7373042Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7373441Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7373800Z return mod(**inputs) 2025-08-14T21:51:18.7374177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7374599Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7375017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7375425Z layer_outputs = layer_module( 2025-08-14T21:51:18.7375796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7376212Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7376652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7377065Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7377477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7377897Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7378303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7378722Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7378872Z 2025-08-14T21:51:18.7378987Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7379379Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7379812Z return mod(**inputs) 2025-08-14T21:51:18.7380201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7380615Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7381019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7381441Z layer_outputs = layer_module( 2025-08-14T21:51:18.7381832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7382239Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7382654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7383082Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7383509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7383938Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7384357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7384773Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7384926Z 2025-08-14T21:51:18.7385053Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7385451Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7385825Z return mod(**inputs) 2025-08-14T21:51:18.7386231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7386651Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7387052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7387468Z layer_outputs = layer_module( 2025-08-14T21:51:18.7387853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7388244Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7388658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7389093Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7389524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7390005Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7390476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:51:18.7390952Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:51:18.7391140Z 2025-08-14T21:51:18.7391264Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7391680Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7392034Z return mod(**inputs) 2025-08-14T21:51:18.7392411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7392804Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7393205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7393609Z layer_outputs = layer_module( 2025-08-14T21:51:18.7393980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7394361Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7394758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7395174Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7395577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7396018Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7396452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:51:18.7396855Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:51:18.7396998Z 2025-08-14T21:51:18.7397110Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7397497Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7397842Z return mod(**inputs) 2025-08-14T21:51:18.7398216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7398612Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7399011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7399411Z layer_outputs = layer_module( 2025-08-14T21:51:18.7399772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7400155Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7400556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7400971Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7401380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7401823Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7402267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:51:18.7402683Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:51:18.7402857Z 2025-08-14T21:51:18.7403002Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7403393Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7403772Z return mod(**inputs) 2025-08-14T21:51:18.7404155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7404570Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7405001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7405458Z layer_outputs = layer_module( 2025-08-14T21:51:18.7405821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7406222Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7406639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7407043Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7407453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7407891Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7408323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:51:18.7408722Z hidden_states = self.wo(hidden_states) 2025-08-14T21:51:18.7408874Z 2025-08-14T21:51:18.7408989Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7561462Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7561787Z return mod(**inputs) 2025-08-14T21:51:18.7562140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7562518Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7562882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7563258Z layer_outputs = layer_module( 2025-08-14T21:51:18.7563604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7563979Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7564378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7564786Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7565184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7565581Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7565976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7566386Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7566550Z 2025-08-14T21:51:18.7566664Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7567062Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7567427Z return mod(**inputs) 2025-08-14T21:51:18.7567807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7568275Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7568687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7569083Z layer_outputs = layer_module( 2025-08-14T21:51:18.7569447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7569825Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7570234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7570642Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7571035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7571441Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7571841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:51:18.7572275Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:51:18.7572446Z 2025-08-14T21:51:18.7572559Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7572940Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7573283Z return mod(**inputs) 2025-08-14T21:51:18.7573649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7574061Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7574456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7574855Z layer_outputs = layer_module( 2025-08-14T21:51:18.7575213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7575596Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7575993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7576393Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7576781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7577183Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7577580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7577973Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7578118Z 2025-08-14T21:51:18.7578228Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7578609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7578953Z return mod(**inputs) 2025-08-14T21:51:18.7579316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7579869Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7580270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7580660Z layer_outputs = layer_module( 2025-08-14T21:51:18.7581030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7581419Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7581852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7582271Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7582675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 485, in forward 2025-08-14T21:51:18.7583163Z hidden_states = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:51:18.7583377Z 2025-08-14T21:51:18.7583497Z cudagraph partition due to non gpu ops. 
[the same "cudagraph partition due to non gpu ops" tracebacks repeat verbatim several more times in this segment of the log]
Found from : 2025-08-14T21:51:18.7743517Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7743594Z return mod(**inputs) 2025-08-14T21:51:18.7743838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7743914Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7744166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7744239Z layer_outputs = layer_module( 2025-08-14T21:51:18.7744473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7744555Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7744798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7744901Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7745146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7745265Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7745515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:51:18.7745615Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:51:18.7745618Z 2025-08-14T21:51:18.7745731Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7746009Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7746105Z return mod(**inputs) 2025-08-14T21:51:18.7746357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7746478Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7746760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7746836Z layer_outputs = layer_module( 2025-08-14T21:51:18.7747070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7747164Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7747406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7747501Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7747775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7747896Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7748145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:51:18.7748230Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:51:18.7748234Z 2025-08-14T21:51:18.7748342Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7748554Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7748624Z return mod(**inputs) 2025-08-14T21:51:18.7748924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7749002Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7749248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7749335Z layer_outputs = layer_module( 2025-08-14T21:51:18.7749560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7749648Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7749899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7749992Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7750243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7750359Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7750603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:51:18.7750706Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:51:18.7750709Z 2025-08-14T21:51:18.7750818Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7751030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7751101Z return mod(**inputs) 2025-08-14T21:51:18.7751350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7751435Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7751681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7751756Z layer_outputs = layer_module( 2025-08-14T21:51:18.7751990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7752074Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7752343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7752448Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7752684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7752822Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7753077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:51:18.7753158Z hidden_states = self.wo(hidden_states) 2025-08-14T21:51:18.7753170Z 2025-08-14T21:51:18.7753272Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7753470Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7753543Z return mod(**inputs) 2025-08-14T21:51:18.7753785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7753861Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7754108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7754180Z layer_outputs = layer_module( 2025-08-14T21:51:18.7754409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7754489Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7754725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7754813Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7755048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7755129Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7755386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7755462Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7755465Z 2025-08-14T21:51:18.7755574Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7755768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7755832Z return mod(**inputs) 2025-08-14T21:51:18.7756071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7756142Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7756377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7756454Z layer_outputs = layer_module( 2025-08-14T21:51:18.7756672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7756757Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7756987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7757067Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7757305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7757384Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7757623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7757699Z key_states = self.k(current_states) 2025-08-14T21:51:18.7757702Z 2025-08-14T21:51:18.7757801Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7758948Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7759021Z return mod(**inputs) 2025-08-14T21:51:18.7759299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7759392Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7759630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7759725Z layer_outputs = layer_module( 2025-08-14T21:51:18.7759941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7760017Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7760258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7760336Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7760579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7760659Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7760891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7761028Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7761032Z 2025-08-14T21:51:18.7761135Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7761336Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7761400Z return mod(**inputs) 2025-08-14T21:51:18.7761637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7761716Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7761953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7762033Z layer_outputs = layer_module( 2025-08-14T21:51:18.7762255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7762334Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7762573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7762650Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7762880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7762967Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7763197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7763271Z value_states = self.v(current_states) 2025-08-14T21:51:18.7763282Z 2025-08-14T21:51:18.7763384Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7763579Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7763650Z return mod(**inputs) 2025-08-14T21:51:18.7763887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7763959Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7764202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7764272Z layer_outputs = layer_module( 2025-08-14T21:51:18.7764492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7764569Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7764820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7764907Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7765153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7765248Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7765502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7765612Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7765615Z 2025-08-14T21:51:18.7765725Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7765916Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7765980Z return mod(**inputs) 2025-08-14T21:51:18.7766220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7766292Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7766533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7766607Z layer_outputs = layer_module( 2025-08-14T21:51:18.7766823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7766910Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7767141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7767219Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7767466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7767544Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7767779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:51:18.7767885Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:51:18.7767890Z 2025-08-14T21:51:18.7767990Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7768187Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7768251Z return mod(**inputs) 2025-08-14T21:51:18.7768479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7768555Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7768808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7768877Z layer_outputs = layer_module( 2025-08-14T21:51:18.7769094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7769182Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7769404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7769489Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7769714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7769791Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7770022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7770096Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7770099Z 2025-08-14T21:51:18.7770204Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7770391Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7770454Z return mod(**inputs) 2025-08-14T21:51:18.7770703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7770801Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7771036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7771130Z layer_outputs = layer_module( 2025-08-14T21:51:18.7771354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7771439Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7771667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7771744Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7771974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 485, in forward 2025-08-14T21:51:18.7772101Z hidden_states = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:51:18.7772106Z 2025-08-14T21:51:18.7772203Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7772403Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7772467Z return mod(**inputs) 2025-08-14T21:51:18.7772702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7772772Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7772997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7773074Z layer_outputs = layer_module( 2025-08-14T21:51:18.7773281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7773367Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7773590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7773678Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7773910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7774021Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7774246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:51:18.7774348Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:51:18.7774351Z 2025-08-14T21:51:18.7774449Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7774645Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7774708Z return mod(**inputs) 2025-08-14T21:51:18.7774946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7775027Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7775260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7775338Z layer_outputs = layer_module( 2025-08-14T21:51:18.7775551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7775628Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7775866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7775953Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7776182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7776318Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7776565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:51:18.7776669Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:51:18.7776673Z 2025-08-14T21:51:18.7776772Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7776988Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7777064Z return mod(**inputs) 2025-08-14T21:51:18.7777298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7777369Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7777614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7777686Z layer_outputs = layer_module( 2025-08-14T21:51:18.7777917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7777999Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7778236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7778334Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7778572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7778694Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7778930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:51:18.7779020Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:51:18.7779024Z 2025-08-14T21:51:18.7779138Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7779334Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7779402Z return mod(**inputs) 2025-08-14T21:51:18.7779743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward 2025-08-14T21:51:18.7779828Z encoder_outputs = self.encoder( 2025-08-14T21:51:18.7780080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7780156Z layer_outputs = layer_module( 2025-08-14T21:51:18.7780388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7780487Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7780737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7780842Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7781095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7781219Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7781489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:51:18.7781576Z hidden_states = self.wo(hidden_states) 2025-08-14T21:51:18.7781580Z 2025-08-14T21:51:18.7781686Z cudagraph partition due to non gpu ops. 
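The encoder-side warnings above all carry the same shape: Inductor's cudagraph partitioning reports "cudagraph partition due to non gpu ops" and prints the Python frames that produced the offending ops, which here land in MT5's attention projections and its gated FFN (wi_0, wi_1, wo). As a rough, self-contained sketch only (ToyFFN and run_once are invented names, not part of the benchmark harness, and this is not the CI job's configuration), the snippet below reproduces the wi_0/wi_1/wo op pattern the frames point at under torch.compile; with a GPU and mode="reduce-overhead" this is the code path where such partition diagnostics are emitted.

import torch
import torch.nn as nn

class ToyFFN(nn.Module):
    # Gated FFN with the same wi_0 / wi_1 / wo structure the frames above point at.
    def __init__(self, d_model=64, d_ff=256):
        super().__init__()
        self.wi_0 = nn.Linear(d_model, d_ff, bias=False)
        self.wi_1 = nn.Linear(d_model, d_ff, bias=False)
        self.wo = nn.Linear(d_ff, d_model, bias=False)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        hidden_gelu = self.act(self.wi_0(hidden_states))
        hidden_linear = self.wi_1(hidden_states)
        hidden_states = hidden_gelu * hidden_linear
        return self.wo(hidden_states)

def run_once():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # "reduce-overhead" enables CUDA graphs when a GPU is present; on CPU we
    # fall back to the default mode so the sketch still runs.
    mode = "reduce-overhead" if device == "cuda" else "default"
    mod = ToyFFN().to(device).eval()
    compiled = torch.compile(mod, mode=mode)
    x = torch.randn(8, 128, 64, device=device)
    with torch.no_grad():
        return compiled(x)

if __name__ == "__main__":
    print(run_once().shape)  # torch.Size([8, 128, 64])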
Found from : 2025-08-14T21:51:18.7781893Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7781962Z return mod(**inputs) 2025-08-14T21:51:18.7782215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7782292Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7782560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7782659Z layer_outputs = layer_module( 2025-08-14T21:51:18.7782889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7782984Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7783240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7783321Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7783563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7783647Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7783880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7783966Z key_states = self.k(current_states) 2025-08-14T21:51:18.7783971Z 2025-08-14T21:51:18.7784073Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7784276Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7784343Z return mod(**inputs) 2025-08-14T21:51:18.7784581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7784665Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7784900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7784972Z layer_outputs = layer_module( 2025-08-14T21:51:18.7785199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7785279Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7785531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7785616Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7785857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7785953Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7786201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7786351Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7786355Z 2025-08-14T21:51:18.7786456Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7786649Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7786724Z return mod(**inputs) 2025-08-14T21:51:18.7786964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7787039Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7787290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7787365Z layer_outputs = layer_module( 2025-08-14T21:51:18.7787588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7787666Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7787899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7787989Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7788223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7788332Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7788577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7788671Z value_states = self.v(current_states) 2025-08-14T21:51:18.7788689Z 2025-08-14T21:51:18.7788801Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7789008Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7789074Z return mod(**inputs) 2025-08-14T21:51:18.7789313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7789384Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7789627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7789697Z layer_outputs = layer_module( 2025-08-14T21:51:18.7789914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7789998Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7790228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7790309Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7790551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7790633Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7790873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7790978Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7790981Z 2025-08-14T21:51:18.7791084Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7791289Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7791356Z return mod(**inputs) 2025-08-14T21:51:18.7791600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7791678Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7791924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7792003Z layer_outputs = layer_module( 2025-08-14T21:51:18.7792219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7792296Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7792537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7792616Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7792856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7792939Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7793178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:51:18.7793295Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:51:18.7793300Z 2025-08-14T21:51:18.7793402Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7793604Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7793669Z return mod(**inputs) 2025-08-14T21:51:18.7793903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7793985Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7794253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7794347Z layer_outputs = layer_module( 2025-08-14T21:51:18.7794578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7794675Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7794934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7795015Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7795246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7795335Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7795571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7795648Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7795660Z 2025-08-14T21:51:18.7795762Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7795955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7796033Z return mod(**inputs) 2025-08-14T21:51:18.7796264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7796338Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7796580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7796651Z layer_outputs = layer_module( 2025-08-14T21:51:18.7796871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7796948Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7797183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7797279Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7797510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7797622Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7797858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:51:18.7797955Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:51:18.7797958Z 2025-08-14T21:51:18.7798067Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7798259Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7798322Z return mod(**inputs) 2025-08-14T21:51:18.7798569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7798640Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7798882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7798957Z layer_outputs = layer_module( 2025-08-14T21:51:18.7799172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7799256Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7799490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7799577Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7799817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7799928Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7800186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:51:18.7800279Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:51:18.7800283Z 2025-08-14T21:51:18.7800403Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7800623Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7800690Z return mod(**inputs) 2025-08-14T21:51:18.7800924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7801004Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7801240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7801321Z layer_outputs = layer_module( 2025-08-14T21:51:18.7801545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7801625Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7801874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7801966Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7802216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7802332Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7802570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:51:18.7802666Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:51:18.7802670Z 2025-08-14T21:51:18.7802773Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7802972Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7803048Z return mod(**inputs) 2025-08-14T21:51:18.7803296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7803381Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7803624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7803697Z layer_outputs = layer_module( 2025-08-14T21:51:18.7803926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7804006Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7804250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7804339Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7804576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7804698Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7804937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:51:18.7805018Z hidden_states = self.wo(hidden_states) 2025-08-14T21:51:18.7805023Z 2025-08-14T21:51:18.7805135Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7805333Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7805407Z return mod(**inputs) 2025-08-14T21:51:18.7805649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7805724Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7805994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7806068Z layer_outputs = layer_module( 2025-08-14T21:51:18.7806305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7806411Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7806667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7806761Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7807000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7807083Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7807329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7807407Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7807412Z 2025-08-14T21:51:18.7807520Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7807718Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7807787Z return mod(**inputs) 2025-08-14T21:51:18.7808033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7808108Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7808353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7808435Z layer_outputs = layer_module( 2025-08-14T21:51:18.7808651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7808736Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7808978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7809060Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7809307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7809391Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7809637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7809715Z key_states = self.k(current_states) 2025-08-14T21:51:18.7809719Z 2025-08-14T21:51:18.7809822Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7810027Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7810093Z return mod(**inputs) 2025-08-14T21:51:18.7810338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7810421Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7810661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7810745Z layer_outputs = layer_module( 2025-08-14T21:51:18.7810963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7811044Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7811289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7811370Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7811608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7811696Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7811956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7812098Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7812149Z 2025-08-14T21:51:18.7812253Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7812475Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7812574Z return mod(**inputs) 2025-08-14T21:51:18.7812814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7812894Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7813130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7813202Z layer_outputs = layer_module( 2025-08-14T21:51:18.7813428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7813507Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7813743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7813832Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7814069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7814158Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7814397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7814475Z value_states = self.v(current_states) 2025-08-14T21:51:18.7814479Z 2025-08-14T21:51:18.7814588Z cudagraph partition due to non gpu ops. 
cudagraph partition due to non gpu ops. Found from :
  (emitted repeatedly between 2025-08-14T21:51:18.781Z and 2025-08-14T21:51:18.792Z, once per MT5 decoder sub-module call site and repeated across decoder blocks; every occurrence shares the outer call stack below, differing only in the innermost frame)

  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)

  Self-attention path (modeling_mt5.py line 559, self_attention_outputs = self.layer[0]( -> line 475, attention_output = self.SelfAttention(), innermost frames):
    line 365: query_states = self.q(hidden_states)
    line 385: key_states = self.k(current_states)
    line 386: value_states = self.v(current_states)
    line 401: scores = torch.matmul(query_states, key_states.transpose(3, 2))
    line 440: attn_output = torch.matmul(attn_weights, value_states)
    line 442: attn_output = attn_output.transpose(1, 2).contiguous()
    line 444: attn_output = self.o(attn_output)
    line 485: hidden_states = hidden_states + self.dropout(attention_output[0])

  Cross-attention path (modeling_mt5.py line 583, cross_attention_outputs = self.layer[1]( -> line 512, attention_output = self.EncDecAttention(), innermost frames):
    line 365: query_states = self.q(hidden_states)
    line 385: key_states = self.k(current_states)
    line 386: value_states = self.v(current_states)
    line 401: scores = torch.matmul(query_states, key_states.transpose(3, 2))
    line 440: attn_output = torch.matmul(attn_weights, value_states)
    line 442: attn_output = attn_output.transpose(1, 2).contiguous()
    line 444: attn_output = self.o(attn_output)

  Feed-forward path (modeling_mt5.py line 609, hidden_states = self.layer[-1](hidden_states) -> line 216, forwarded_states = self.DenseReluDense(forwarded_states), innermost frames):
    line 183: hidden_gelu = self.act(self.wi_0(hidden_states))
    line 184: hidden_linear = self.wi_1(hidden_states)
    line 185: hidden_states = hidden_gelu * hidden_linear
    line 198: hidden_states = self.wo(hidden_states)
Found from : 2025-08-14T21:51:18.7924507Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7924579Z return mod(**inputs) 2025-08-14T21:51:18.7924814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7924887Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7925131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7925204Z layer_outputs = layer_module( 2025-08-14T21:51:18.7925423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7925502Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7925733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7925819Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7926050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7926132Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7926371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7926464Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7926468Z 2025-08-14T21:51:18.7926590Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7926784Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7926873Z return mod(**inputs) 2025-08-14T21:51:18.7927134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7927207Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7927440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7927517Z layer_outputs = layer_module( 2025-08-14T21:51:18.7927726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7927810Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7928040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7928120Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7928360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 524, in forward 2025-08-14T21:51:18.7928489Z layer_output = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:51:18.7928493Z 2025-08-14T21:51:18.7928599Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7928791Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7928855Z return mod(**inputs) 2025-08-14T21:51:18.7929097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7929168Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7929402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7929481Z layer_outputs = layer_module( 2025-08-14T21:51:18.7929693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7929776Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7930005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7930093Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7930331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7930443Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7930681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:51:18.7930778Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:51:18.7930782Z 2025-08-14T21:51:18.7930883Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7931084Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7931148Z return mod(**inputs) 2025-08-14T21:51:18.7931383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7931462Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7931697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7931774Z layer_outputs = layer_module( 2025-08-14T21:51:18.7931987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7932064Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7932320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7932424Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7932671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7932798Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7933043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:51:18.7933131Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:51:18.7933134Z 2025-08-14T21:51:18.7933234Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7933428Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7933500Z return mod(**inputs) 2025-08-14T21:51:18.7933737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7933818Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7934052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7934123Z layer_outputs = layer_module( 2025-08-14T21:51:18.7934351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7934428Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7934664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7934761Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7935000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7935124Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7935363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:51:18.7935452Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:51:18.7935458Z 2025-08-14T21:51:18.7935568Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7935767Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7935838Z return mod(**inputs) 2025-08-14T21:51:18.7936080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7936154Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7936406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7936476Z layer_outputs = layer_module( 2025-08-14T21:51:18.7936701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7936787Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7937028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7937127Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7937366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7937480Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7937733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:51:18.7937811Z hidden_states = self.wo(hidden_states) 2025-08-14T21:51:18.7937814Z 2025-08-14T21:51:18.7937927Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7938142Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7938224Z return mod(**inputs) 2025-08-14T21:51:18.7938471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7938564Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7938818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7938898Z layer_outputs = layer_module( 2025-08-14T21:51:18.7939116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7939204Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7939554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7939654Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7939928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7940019Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7940286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7940381Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7940387Z 2025-08-14T21:51:18.7940502Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7940729Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7940801Z return mod(**inputs) 2025-08-14T21:51:18.7941068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7941161Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7941427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7941520Z layer_outputs = layer_module( 2025-08-14T21:51:18.7941892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7941992Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7942261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7942350Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7942605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7942705Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7942964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7943058Z key_states = self.k(current_states) 2025-08-14T21:51:18.7943064Z 2025-08-14T21:51:18.7943177Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7943397Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7943483Z return mod(**inputs) 2025-08-14T21:51:18.7943743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7943841Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7944101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7944181Z layer_outputs = layer_module( 2025-08-14T21:51:18.7944429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7944515Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7944836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7944938Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7945221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7945341Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7945632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7945776Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7945781Z 2025-08-14T21:51:18.7945905Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7946123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7946206Z return mod(**inputs) 2025-08-14T21:51:18.7946473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7946554Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7946828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7946906Z layer_outputs = layer_module( 2025-08-14T21:51:18.7947143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7947240Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7947501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7947597Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7947857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7947944Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7948213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7948299Z value_states = self.v(current_states) 2025-08-14T21:51:18.7948303Z 2025-08-14T21:51:18.7948414Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7948639Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7948713Z return mod(**inputs) 2025-08-14T21:51:18.7948986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7949065Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7949327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7949411Z layer_outputs = layer_module( 2025-08-14T21:51:18.7949652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7949744Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7950006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7950095Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7950362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7950449Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7950703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7950827Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7950831Z 2025-08-14T21:51:18.7950942Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7951166Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7951258Z return mod(**inputs) 2025-08-14T21:51:18.7951501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7951609Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7951872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7951956Z layer_outputs = layer_module( 2025-08-14T21:51:18.7952177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7952253Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7952489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7952566Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7952797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7952885Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7953116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:51:18.7953229Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:51:18.7953233Z 2025-08-14T21:51:18.7953335Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7953527Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7953599Z return mod(**inputs) 2025-08-14T21:51:18.7953833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7953905Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7954147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7954217Z layer_outputs = layer_module( 2025-08-14T21:51:18.7954437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7954515Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7954748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7954837Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7955066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7955153Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7955386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7955461Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7955464Z 2025-08-14T21:51:18.7955573Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7955765Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7955830Z return mod(**inputs) 2025-08-14T21:51:18.7956072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7956145Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7956388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7956457Z layer_outputs = layer_module( 2025-08-14T21:51:18.7956670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7956756Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7956988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7957082Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7957333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7957432Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7957685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7957777Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7957781Z 2025-08-14T21:51:18.7957882Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7958080Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7958144Z return mod(**inputs) 2025-08-14T21:51:18.7958384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7958456Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7958690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7958769Z layer_outputs = layer_module( 2025-08-14T21:51:18.7958988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7959066Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7959306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7959384Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7959619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7959699Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7959928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7960013Z key_states = self.k(current_states) 2025-08-14T21:51:18.7960016Z 2025-08-14T21:51:18.7960116Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7960314Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7960378Z return mod(**inputs) 2025-08-14T21:51:18.7960611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7960690Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7960922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7960992Z layer_outputs = layer_module( 2025-08-14T21:51:18.7961214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7961291Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7961532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7961611Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7961840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7961931Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7962162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7962285Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7962296Z 2025-08-14T21:51:18.7962396Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7962585Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7962655Z return mod(**inputs) 2025-08-14T21:51:18.7962907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7962996Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7963241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7963327Z layer_outputs = layer_module( 2025-08-14T21:51:18.7963566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7963645Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7963876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7963961Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7964191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7964272Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7964511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7964588Z value_states = self.v(current_states) 2025-08-14T21:51:18.7964593Z 2025-08-14T21:51:18.7964699Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7964893Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7964956Z return mod(**inputs) 2025-08-14T21:51:18.7965195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7965267Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7965507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7965584Z layer_outputs = layer_module( 2025-08-14T21:51:18.7965799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7965882Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7966113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7966191Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7966432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7966512Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7966750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7966854Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7966857Z 2025-08-14T21:51:18.7966957Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7967161Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7967228Z return mod(**inputs) 2025-08-14T21:51:18.7967467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7967551Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7967795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7967875Z layer_outputs = layer_module( 2025-08-14T21:51:18.7968093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7968170Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7968416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7968496Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7968759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7968859Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7969098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-08-14T21:51:18.7969232Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:51:18.7969249Z 2025-08-14T21:51:18.7969353Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7969552Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7969626Z return mod(**inputs) 2025-08-14T21:51:18.7969868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7969949Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7970192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7970264Z layer_outputs = layer_module( 2025-08-14T21:51:18.7970493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7970573Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7970811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.7970899Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.7971137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-08-14T21:51:18.7971227Z attention_output = self.EncDecAttention( 2025-08-14T21:51:18.7971464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward 2025-08-14T21:51:18.7971541Z attn_output = self.o(attn_output) 2025-08-14T21:51:18.7971547Z 2025-08-14T21:51:18.7971657Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7971858Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7971931Z return mod(**inputs) 2025-08-14T21:51:18.7972171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7972245Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7972491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7972562Z layer_outputs = layer_module( 2025-08-14T21:51:18.7972783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7972868Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7973109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7973209Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7973447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7973564Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7973810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:51:18.7973910Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:51:18.7973914Z 2025-08-14T21:51:18.7974024Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7974230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7974295Z return mod(**inputs) 2025-08-14T21:51:18.7974569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7974647Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7974902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7974998Z layer_outputs = layer_module( 2025-08-14T21:51:18.7975231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7975318Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7975558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7975648Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7975890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7976003Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7976248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:51:18.7976327Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:51:18.7976330Z 2025-08-14T21:51:18.7976433Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7976637Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7976703Z return mod(**inputs) 2025-08-14T21:51:18.7976945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7977027Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7977267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7977346Z layer_outputs = layer_module( 2025-08-14T21:51:18.7977568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7977646Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7977890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7977978Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7978215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7978337Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7978576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:51:18.7978670Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:51:18.7978674Z 2025-08-14T21:51:18.7978775Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7978975Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7979051Z return mod(**inputs) 2025-08-14T21:51:18.7979304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7979391Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7979991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7980093Z layer_outputs = layer_module( 2025-08-14T21:51:18.7980344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7980429Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7980752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7980864Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7981136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.7981279Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.7981518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:51:18.7981616Z hidden_states = self.wo(hidden_states) 2025-08-14T21:51:18.7981620Z 2025-08-14T21:51:18.7981752Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7981958Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7982033Z return mod(**inputs) 2025-08-14T21:51:18.7982278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7982352Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7982600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7982673Z layer_outputs = layer_module( 2025-08-14T21:51:18.7982893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7982980Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7983219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.7983314Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.7983552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 217, in forward 2025-08-14T21:51:18.7983679Z hidden_states = hidden_states + self.dropout(forwarded_states) 2025-08-14T21:51:18.7983683Z 2025-08-14T21:51:18.7983793Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7983992Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7984065Z return mod(**inputs) 2025-08-14T21:51:18.7984308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7984384Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7984631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7984704Z layer_outputs = layer_module( 2025-08-14T21:51:18.7984930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7985018Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7985255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7985345Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7985581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7985666Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7985916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 365, in forward 2025-08-14T21:51:18.7986023Z query_states = self.q(hidden_states) 2025-08-14T21:51:18.7986026Z 2025-08-14T21:51:18.7986132Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7986338Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7986405Z return mod(**inputs) 2025-08-14T21:51:18.7986654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7986727Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7986991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7987071Z layer_outputs = layer_module( 2025-08-14T21:51:18.7987304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7987404Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7987652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7988707Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7988950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7989032Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7989263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 385, in forward 2025-08-14T21:51:18.7989345Z key_states = self.k(current_states) 2025-08-14T21:51:18.7989349Z 2025-08-14T21:51:18.7989453Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7989653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7989720Z return mod(**inputs) 2025-08-14T21:51:18.7989963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7990047Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7990280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7990351Z layer_outputs = layer_module( 2025-08-14T21:51:18.7990573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7990651Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7990887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7990968Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7991196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7991287Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7991526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-08-14T21:51:18.7991659Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:51:18.7991663Z 2025-08-14T21:51:18.7991765Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.7991957Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7992030Z return mod(**inputs) 2025-08-14T21:51:18.7992262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7992337Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7992577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7992649Z layer_outputs = layer_module( 2025-08-14T21:51:18.7992870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7992950Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7993180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7993267Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7993501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7993590Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7993851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 386, in forward 2025-08-14T21:51:18.7993933Z value_states = self.v(current_states) 2025-08-14T21:51:18.7993953Z 2025-08-14T21:51:18.7994066Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.7994280Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.7994347Z return mod(**inputs) 2025-08-14T21:51:18.7994628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.7994712Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.7994954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.7995023Z layer_outputs = layer_module( 2025-08-14T21:51:18.7995234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.7995319Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.7995546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-08-14T21:51:18.7995627Z self_attention_outputs = self.layer[0]( 2025-08-14T21:51:18.7995872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-08-14T21:51:18.7995956Z attention_output = self.SelfAttention( 2025-08-14T21:51:18.7996202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-08-14T21:51:18.7996308Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:51:18.7996311Z 2025-08-14T21:51:18.7996413Z cudagraph partition due to non gpu ops. 
2025-08-14T21:51:18.7998750Z cudagraph partition due to non gpu ops. Found from : 
2025-08-14T21:51:18.7998957Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:51:18.7999023Z return mod(**inputs)
2025-08-14T21:51:18.7999269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
2025-08-14T21:51:18.7999350Z decoder_outputs = self.decoder(
2025-08-14T21:51:18.7999593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-08-14T21:51:18.7999664Z layer_outputs = layer_module(
2025-08-14T21:51:18.7999915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:51:18.8000012Z return super().__call__(*args, **kwargs)
2025-08-14T21:51:18.8000261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-08-14T21:51:18.8000358Z self_attention_outputs = self.layer[0](
2025-08-14T21:51:18.8000634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-08-14T21:51:18.8000726Z attention_output = self.SelfAttention(
2025-08-14T21:51:18.8000962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 444, in forward
2025-08-14T21:51:18.8001039Z attn_output = self.o(attn_output)
2025-08-14T21:51:18.8001049Z 
The same "cudagraph partition due to non gpu ops" record is logged repeatedly between 2025-08-14T21:51:18.7996619Z and 2025-08-14T21:51:18.8105475Z, cycling over the ops of the MT5 decoder blocks. Every record shares the forward_pass -> decoder -> layer_module -> __call__ prefix shown above and differs only in the innermost modeling_mt5.py frames:
- self-attention (line 559 -> 475, SelfAttention): self.q(hidden_states) (line 365), self.k(current_states) (line 385), scores = torch.matmul(query_states, key_states.transpose(3, 2)) (line 401), self.v(current_states) (line 386), attn_output = torch.matmul(attn_weights, value_states) (line 440), attn_output.transpose(1, 2).contiguous() (line 442), self.o(attn_output) (line 444), and the residual add hidden_states + self.dropout(attention_output[0]) (line 485)
- cross-attention (line 583 -> 512, EncDecAttention): the same call sites at lines 365, 385, 401, 386, 440, 442, and 444
- feed-forward (line 609 -> 216, DenseReluDense): self.act(self.wi_0(hidden_states)) (line 183), self.wi_1(hidden_states) (line 184), hidden_gelu * hidden_linear (line 185), and self.wo(hidden_states) (line 198)
Found from : 2025-08-14T21:51:18.8105678Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.8105741Z return mod(**inputs) 2025-08-14T21:51:18.8105981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.8106053Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.8106288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.8106368Z layer_outputs = layer_module( 2025-08-14T21:51:18.8106581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.8106665Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.8106900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-08-14T21:51:18.8106977Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:51:18.8107214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 524, in forward 2025-08-14T21:51:18.8107341Z layer_output = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:51:18.8107345Z 2025-08-14T21:51:18.8107445Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.8107663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.8107728Z return mod(**inputs) 2025-08-14T21:51:18.8107985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.8108085Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.8108337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.8108416Z layer_outputs = layer_module( 2025-08-14T21:51:18.8108634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.8108709Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.8108945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.8109033Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.8109272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.8109386Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.8109615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 183, in forward 2025-08-14T21:51:18.8109724Z hidden_gelu = self.act(self.wi_0(hidden_states)) 2025-08-14T21:51:18.8109728Z 2025-08-14T21:51:18.8109826Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.8110025Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.8110090Z return mod(**inputs) 2025-08-14T21:51:18.8110321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.8110401Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.8110635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.8110706Z layer_outputs = layer_module( 2025-08-14T21:51:18.8110924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.8111002Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.8111239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.8111326Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.8111555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.8111672Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.8111901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 184, in forward 2025-08-14T21:51:18.8111985Z hidden_linear = self.wi_1(hidden_states) 2025-08-14T21:51:18.8111988Z 2025-08-14T21:51:18.8112087Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.8112277Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.8112350Z return mod(**inputs) 2025-08-14T21:51:18.8112584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.8112657Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.8112897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.8112966Z layer_outputs = layer_module( 2025-08-14T21:51:18.8113186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.8113262Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.8113505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.8113614Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.8113842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.8113979Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.8114223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 185, in forward 2025-08-14T21:51:18.8114311Z hidden_states = hidden_gelu * hidden_linear 2025-08-14T21:51:18.8114314Z 2025-08-14T21:51:18.8114422Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.8114614Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.8114677Z return mod(**inputs) 2025-08-14T21:51:18.8114920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-08-14T21:51:18.8114991Z decoder_outputs = self.decoder( 2025-08-14T21:51:18.8115234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-08-14T21:51:18.8115304Z layer_outputs = layer_module( 2025-08-14T21:51:18.8115518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:51:18.8115607Z return super().__call__(*args, **kwargs) 2025-08-14T21:51:18.8115837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 609, in forward 2025-08-14T21:51:18.8115923Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:51:18.8116162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 216, in forward 2025-08-14T21:51:18.8116275Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:51:18.8116518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 198, in forward 2025-08-14T21:51:18.8116597Z hidden_states = self.wo(hidden_states) 2025-08-14T21:51:18.8116602Z 2025-08-14T21:51:18.8116703Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:51:18.8116903Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.8116967Z return mod(**inputs) 2025-08-14T21:51:18.8117208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1816, in forward 2025-08-14T21:51:18.8117294Z lm_logits = self.lm_head(sequence_output) 2025-08-14T21:51:18.8117297Z 2025-08-14T21:51:18.8117397Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:51:18.8117596Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:51:18.8117662Z return mod(**inputs) 2025-08-14T21:51:18.8117906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1823, in forward 2025-08-14T21:51:18.8118053Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) 2025-08-14T21:51:18.8118058Z 2025-08-14T21:51:30.2054328Z Compilation time (from dynamo_timed): 26.725845337 2025-08-14T21:51:30.2268481Z pass 2025-08-14T21:51:30.2268933Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:51:30.2269717Z TIMING: _recursive_pre_grad_passes:0.09249 _recursive_joint_graph_passes:0.75627 _recursive_post_grad_passes:0.26248 async_compile.wait:0.83442 code_gen:11.35681 inductor_compile:14.10669 backend_compile:22.42276 gc:0.00059 entire_frame_compile:26.72585 total_wall_time:26.72585 2025-08-14T21:51:30.2270967Z STATS: call_* op count: 1189 | FakeTensorMode.__torch_dispatch__:50742 | FakeTensor.__torch_dispatch__:8076 | ProxyTorchDispatchMode.__torch_dispatch__:12602 2025-08-14T21:51:30.2271488Z Dynamo produced 1 graphs covering 1189 ops with 0 graph breaks (0 unique) 2025-08-14T21:51:36.1849853Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:51:36.1850902Z from pkg_resources import resource_filename 2025-08-14T21:51:36.8963825Z 2025-08-14T21:51:36.9089462Z loading model: 0it [00:00, ?it/s]If you want to use `MegatronBertForCausalLM` as a standalone, add `is_decoder=True.` 2025-08-14T21:51:36.9092023Z WARNING:transformers.models.megatron_bert.modeling_megatron_bert:If you want to use `MegatronBertForCausalLM` as a standalone, add `is_decoder=True.` 2025-08-14T21:51:40.3399933Z 2025-08-14T21:51:40.3402908Z loading model: 0it [00:03, ?it/s] 2025-08-14T21:51:40.3426047Z cpu eval MegatronBertForCausalLM 2025-08-14T21:51:42.0671893Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:51:42.6730542Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:51:43.2785113Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:52:01.7586769Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7590130Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7590463Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7590711Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7591014Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7594832Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7595206Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7600453Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7602743Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7604395Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7610473Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7615483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7616112Z return mod(**inputs) 2025-08-14T21:52:01.7616674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7617174Z outputs = self.bert( 2025-08-14T21:52:01.7617625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7618119Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7618596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7619062Z layer_outputs = layer_module( 2025-08-14T21:52:01.7619449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7620036Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7620647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7621137Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7621594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7622048Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7622544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7623070Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7623907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7624498Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7624985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7625375Z return self.act(input) 2025-08-14T21:52:01.7625512Z 2025-08-14T21:52:01.7625653Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7625902Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7626128Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7626360Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7626589Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7626815Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7627032Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7627256Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7627483Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7627700Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7627925Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7628186Z cudagraph partition due to non gpu ops. 
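The repeated "cudagraph partition due to non gpu ops" messages above are Inductor reporting that it split the compiled graph around operations it cannot record into a CUDA graph; on this CPU-only benchmark shard every op falls into that bucket. Below is a minimal sketch of the same situation, assuming an illustrative toy model and the standard `torch.compile` entry point rather than the benchmark harness itself:

```python
# Minimal sketch, assuming a toy CPU model: requesting cudagraphs
# ("reduce-overhead" mode) for a graph whose ops all run on CPU, which is the
# situation the "cudagraph partition due to non gpu ops" lines above describe.
import torch

model = torch.nn.Sequential(      # illustrative model, never moved to a GPU
    torch.nn.Linear(64, 256),
    torch.nn.GELU(),
    torch.nn.Linear(256, 64),
)

# "reduce-overhead" enables cudagraphs in the Inductor backend; with CPU-only
# ops there is nothing to capture into a CUDA graph, so Inductor partitions
# around them instead of capturing.
compiled = torch.compile(model, mode="reduce-overhead")

with torch.no_grad():
    out = compiled(torch.randn(8, 64))
```

Whether this toy repro prints the exact diagnostic seen above depends on the PyTorch build and logging settings; it is only meant to illustrate why a CPU-only run emits these partition messages.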
Found from : 2025-08-14T21:52:01.7628585Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7628960Z return mod(**inputs) 2025-08-14T21:52:01.7629404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7629861Z outputs = self.bert( 2025-08-14T21:52:01.7630290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7630760Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7631226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7631682Z layer_outputs = layer_module( 2025-08-14T21:52:01.7632070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7632472Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7632940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7633414Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7633858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7634292Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7634784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7635301Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7635805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7636318Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7636739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7637114Z return self.act(input) 2025-08-14T21:52:01.7637241Z 2025-08-14T21:52:01.7637329Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7637563Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7637781Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7638007Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7638231Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7638446Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7638669Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7638931Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7639151Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7639400Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7639624Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7639904Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7640323Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7640688Z return mod(**inputs) 2025-08-14T21:52:01.7641132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7641586Z outputs = self.bert( 2025-08-14T21:52:01.7642334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7642810Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7643294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7643753Z layer_outputs = layer_module( 2025-08-14T21:52:01.7644145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7644555Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7645034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7645502Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7645949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7646387Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7646885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7647414Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7647922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7648425Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7648841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7649213Z return self.act(input) 2025-08-14T21:52:01.7649334Z 2025-08-14T21:52:01.7649429Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7649660Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7649884Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7650109Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7650336Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7650555Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7650781Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7651006Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7651226Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7651452Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7651675Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7651922Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7652318Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7652680Z return mod(**inputs) 2025-08-14T21:52:01.7653117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7653564Z outputs = self.bert( 2025-08-14T21:52:01.7653995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7654514Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7654964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7655451Z layer_outputs = layer_module( 2025-08-14T21:52:01.7655863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7656297Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7656760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7657237Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7657686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7658121Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7658611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7659142Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7659726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7660243Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7660658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7661034Z return self.act(input) 2025-08-14T21:52:01.7661156Z 2025-08-14T21:52:01.7661253Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7661477Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7661712Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7661946Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7662166Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7662403Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7662633Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7662864Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7663082Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7663311Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7663537Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7663788Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7664192Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7664584Z return mod(**inputs) 2025-08-14T21:52:01.7665024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7665477Z outputs = self.bert( 2025-08-14T21:52:01.7665919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7666386Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7666848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7667311Z layer_outputs = layer_module( 2025-08-14T21:52:01.7667696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7668094Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7668611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7669100Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7669542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7669976Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7670518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7671067Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7671575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7672077Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7672486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7672869Z return self.act(input) 2025-08-14T21:52:01.7672993Z 2025-08-14T21:52:01.7673092Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7673317Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7673552Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7673786Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7674008Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7674236Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7674465Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7674690Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7674914Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7675139Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7675368Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7675622Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7676027Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7676386Z return mod(**inputs) 2025-08-14T21:52:01.7676817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7677276Z outputs = self.bert( 2025-08-14T21:52:01.7677708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7678173Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7678619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7679078Z layer_outputs = layer_module( 2025-08-14T21:52:01.7679461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7679863Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7680328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7680810Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7681253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7681692Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7682184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7682718Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7683216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7683715Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7684139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7684528Z return self.act(input) 2025-08-14T21:52:01.7684653Z 2025-08-14T21:52:01.7684751Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7684975Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7685205Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7685460Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7685678Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7685923Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7686142Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7686372Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7686590Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7686828Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7687045Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7687305Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7687704Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7688060Z return mod(**inputs) 2025-08-14T21:52:01.7688503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7688963Z outputs = self.bert( 2025-08-14T21:52:01.7689402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7689861Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7690320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7690792Z layer_outputs = layer_module( 2025-08-14T21:52:01.7691177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7691564Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7692041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7692514Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7692961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7693397Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7693893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7694425Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7694913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7695432Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7695855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7696242Z return self.act(input) 2025-08-14T21:52:01.7696363Z 2025-08-14T21:52:01.7696450Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7696684Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7696915Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7697136Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7697385Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7697611Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7697840Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7698063Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7698293Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7698518Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7698739Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7699002Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7699409Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7699890Z return mod(**inputs) 2025-08-14T21:52:01.7700378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7700849Z outputs = self.bert( 2025-08-14T21:52:01.7701324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7701812Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7702303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7702774Z layer_outputs = layer_module( 2025-08-14T21:52:01.7703166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7703588Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7704059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7704556Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7705011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7705445Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7705928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7706445Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7706931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7707421Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7707828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7708207Z return self.act(input) 2025-08-14T21:52:01.7708323Z 2025-08-14T21:52:01.7708410Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7708634Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7708859Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7709070Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7709290Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7709509Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7709723Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7709942Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7710160Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7710370Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7710589Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7710842Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7711231Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7711571Z return mod(**inputs) 2025-08-14T21:52:01.7712002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7712452Z outputs = self.bert( 2025-08-14T21:52:01.7712868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7713326Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7713777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7714230Z layer_outputs = layer_module( 2025-08-14T21:52:01.7714598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7714990Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7715447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7715946Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7716377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7716836Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7717348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7717851Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7718337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7718831Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7719241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7719602Z return self.act(input) 2025-08-14T21:52:01.7719733Z 2025-08-14T21:52:01.7719823Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7720056Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7720279Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7720504Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7720732Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7720957Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7721177Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7721401Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7721630Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7721847Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7722073Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7722330Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7722717Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7723074Z return mod(**inputs) 2025-08-14T21:52:01.7723504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7723953Z outputs = self.bert( 2025-08-14T21:52:01.7724377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7724831Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7725292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7725736Z layer_outputs = layer_module( 2025-08-14T21:52:01.7726123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7726516Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7726976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7727438Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7727880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7728309Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7728791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7729306Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7729804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7730301Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7730707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7731120Z return self.act(input) 2025-08-14T21:52:01.7731248Z 2025-08-14T21:52:01.7731349Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7731573Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7731809Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7732027Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7732246Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7732478Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7732698Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7732924Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7733143Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7733398Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7733622Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7733879Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7734282Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7734645Z return mod(**inputs) 2025-08-14T21:52:01.7735085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7735542Z outputs = self.bert( 2025-08-14T21:52:01.7735993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7736468Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7736927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7737420Z layer_outputs = layer_module( 2025-08-14T21:52:01.7737800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7738204Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7738676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7739156Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7739695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7740149Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7740638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7741185Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7741687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7742423Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7742850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7743240Z return self.act(input) 2025-08-14T21:52:01.7743362Z 2025-08-14T21:52:01.7743459Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7743689Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7743921Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7744152Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7744383Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7744609Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7744839Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7745067Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7745285Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7745515Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7745740Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7745993Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:01.7746468Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:01.7746861Z return mod(**inputs) 2025-08-14T21:52:01.7747306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1064, in forward 2025-08-14T21:52:01.7747784Z outputs = self.bert( 2025-08-14T21:52:01.7748246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:01.7748729Z encoder_outputs = self.encoder( 2025-08-14T21:52:01.7749187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:01.7749656Z layer_outputs = layer_module( 2025-08-14T21:52:01.7750012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:01.7750382Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:01.7750802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:01.7751244Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:01.7751658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:01.7752063Z return forward_fn(*input_tensors) 2025-08-14T21:52:01.7752506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:01.7752988Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:01.7753453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:01.7753936Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:01.7754345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:01.7754709Z return self.act(input) 2025-08-14T21:52:01.7754824Z 2025-08-14T21:52:01.7754922Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7755153Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7755365Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7755576Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7755778Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7755984Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7756191Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7756391Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7756597Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7756802Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7757012Z cudagraph partition due to non gpu ops 2025-08-14T21:52:01.7757248Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:52:01.7886798Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:01.7887110Z return mod(**inputs)
2025-08-14T21:52:01.7887503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1086, in forward
2025-08-14T21:52:01.7887918Z lm_loss = self.loss_function(
2025-08-14T21:52:01.7888284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-08-14T21:52:01.7888806Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-08-14T21:52:01.7889306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-08-14T21:52:01.7889837Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T21:52:01.7890101Z
2025-08-14T21:52:14.1377072Z Compilation time (from dynamo_timed): 28.911248336
2025-08-14T21:52:14.1436006Z pass
2025-08-14T21:52:14.1436445Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:14.1437325Z TIMING: _recursive_pre_grad_passes:0.07229 _recursive_joint_graph_passes:0.81167 _recursive_post_grad_passes:0.12891 async_compile.wait:0.91399 code_gen:10.82864 inductor_compile:14.0407 backend_compile:23.17876 gc:0.00018 entire_frame_compile:28.91125 total_wall_time:28.91125
2025-08-14T21:52:14.1438310Z STATS: call_* op count: 723 | FakeTensorMode.__torch_dispatch__:51441 | FakeTensor.__torch_dispatch__:7316 | ProxyTorchDispatchMode.__torch_dispatch__:12522
2025-08-14T21:52:14.1438838Z Dynamo produced 1 graphs covering 723 ops with 0 graph breaks (0 unique)
2025-08-14T21:52:20.3618411Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
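The "TIMING:" line above is the per-phase compile breakdown accumulated by dynamo_timed (graph passes, codegen, overall Inductor and backend compile, 28.9 s wall time in total for this model). A comparable aggregate can be printed outside the harness with the sketch below; torch._dynamo.utils.compile_times is a private helper, so its availability and exact output format across builds is an assumption here.

import torch
from torch._dynamo.utils import compile_times  # private API; subject to change

@torch.compile
def f(x):
    # Small compiled function, just enough to populate the timing counters.
    return torch.nn.functional.gelu(x @ x)

f(torch.randn(16, 16))

# Prints the accumulated compile-phase timings, analogous in spirit to the
# "TIMING: ..." summary emitted by the benchmark run above.
print(compile_times())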
2025-08-14T21:52:20.3619665Z from pkg_resources import resource_filename
2025-08-14T21:52:20.9799107Z
2025-08-14T21:52:24.2360554Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:52:24.2361595Z loading model: 0it [00:03, ?it/s]
2025-08-14T21:52:24.2389063Z cpu eval MegatronBertForQuestionAnswering
2025-08-14T21:52:25.9313623Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:26.6231713Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:27.2601710Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:46.1529159Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1531857Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1532142Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1532417Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1532652Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1532879Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1533104Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1533317Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1533536Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1533756Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1534009Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:52:46.1534427Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:46.1534803Z return mod(**inputs)
2025-08-14T21:52:46.1535283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward
2025-08-14T21:52:46.1535751Z outputs = self.bert(
2025-08-14T21:52:46.1536206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward
2025-08-14T21:52:46.1536693Z encoder_outputs = self.encoder(
2025-08-14T21:52:46.1537179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward
2025-08-14T21:52:46.1537640Z layer_outputs = layer_module(
2025-08-14T21:52:46.1538037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:52:46.1538724Z return super().__call__(*args, **kwargs)
2025-08-14T21:52:46.1539212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward
2025-08-14T21:52:46.1539949Z layer_output = apply_chunking_to_forward(
2025-08-14T21:52:46.1540482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:52:46.1540983Z return forward_fn(*input_tensors)
2025-08-14T21:52:46.1541482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk
2025-08-14T21:52:46.1542242Z intermediate_output = self.intermediate(ln_output)
2025-08-14T21:52:46.1542732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward
2025-08-14T21:52:46.1543242Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:52:46.1543667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:52:46.1544054Z return self.act(input)
2025-08-14T21:52:46.1544185Z
2025-08-14T21:52:46.1544284Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1544531Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1544767Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1545026Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1545241Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1545454Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1545661Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1545876Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1546092Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1546299Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1546516Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1546767Z cudagraph partition due to non gpu ops.
2025-08-14T21:52:46.1556871Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1557085Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1557349Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1557574Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1557798Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1558016Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1558243Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1558505Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:46.1558902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:46.1559263Z return mod(**inputs) 2025-08-14T21:52:46.1559708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:52:46.1560174Z outputs = self.bert( 2025-08-14T21:52:46.1560602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:46.1561067Z encoder_outputs = self.encoder( 2025-08-14T21:52:46.1561526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:46.1561996Z layer_outputs = layer_module( 2025-08-14T21:52:46.1562376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:46.1562781Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:46.1563251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:46.1563726Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:46.1564178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:46.1564610Z return forward_fn(*input_tensors) 2025-08-14T21:52:46.1565093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:46.1565611Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:46.1566105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:46.1566604Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:46.1567024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:46.1567389Z return self.act(input) 2025-08-14T21:52:46.1567629Z 2025-08-14T21:52:46.1567720Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1567959Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1568179Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1568413Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1568643Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1568863Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1569092Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1569334Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1569563Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1569805Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1570025Z cudagraph partition due to non gpu 
ops 2025-08-14T21:52:46.1570301Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:52:46.1570688Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:46.1571064Z return mod(**inputs) 2025-08-14T21:52:46.1571527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:52:46.1571983Z outputs = self.bert( 2025-08-14T21:52:46.1572405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:46.1572876Z encoder_outputs = self.encoder( 2025-08-14T21:52:46.1573338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:46.1573790Z layer_outputs = layer_module( 2025-08-14T21:52:46.1574255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:46.1574743Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:46.1575480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:46.1576018Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:46.1576553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:46.1577103Z return forward_fn(*input_tensors) 2025-08-14T21:52:46.1577672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:46.1578243Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:46.1578956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:46.1579644Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:46.1580180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:46.1580614Z return self.act(input) 2025-08-14T21:52:46.1580794Z 2025-08-14T21:52:46.1580916Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1581234Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1581521Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1581826Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1582133Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1593185Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1593465Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1593695Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1593938Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1594170Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1594396Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1594665Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:46.1595103Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:46.1595480Z return mod(**inputs) 2025-08-14T21:52:46.1595973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:52:46.1596459Z outputs = self.bert( 2025-08-14T21:52:46.1596910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:46.1597375Z encoder_outputs = self.encoder( 2025-08-14T21:52:46.1597970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:46.1598447Z layer_outputs = layer_module( 2025-08-14T21:52:46.1598881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:46.1599313Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:46.1599815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:46.1600297Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:46.1600738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:46.1601175Z return forward_fn(*input_tensors) 2025-08-14T21:52:46.1601670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:46.1602205Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:46.1602696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:46.1603210Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:46.1603640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:46.1604020Z return self.act(input) 2025-08-14T21:52:46.1604148Z 2025-08-14T21:52:46.1604243Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1604481Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1604709Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1604924Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1605148Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1605372Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1605587Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1605815Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1606044Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1606273Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1606487Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1606754Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:52:46.1607159Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:52:46.1607525Z return mod(**inputs) 2025-08-14T21:52:46.1607969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward 2025-08-14T21:52:46.1608426Z outputs = self.bert( 2025-08-14T21:52:46.1608849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward 2025-08-14T21:52:46.1609314Z encoder_outputs = self.encoder( 2025-08-14T21:52:46.1609774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward 2025-08-14T21:52:46.1610240Z layer_outputs = layer_module( 2025-08-14T21:52:46.1610616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:52:46.1611021Z return super().__call__(*args, **kwargs) 2025-08-14T21:52:46.1611488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward 2025-08-14T21:52:46.1611963Z layer_output = apply_chunking_to_forward( 2025-08-14T21:52:46.1612398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:52:46.1612834Z return forward_fn(*input_tensors) 2025-08-14T21:52:46.1613351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk 2025-08-14T21:52:46.1613874Z intermediate_output = self.intermediate(ln_output) 2025-08-14T21:52:46.1614382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward 2025-08-14T21:52:46.1614908Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:52:46.1615365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:52:46.1615738Z return self.act(input) 2025-08-14T21:52:46.1615871Z 2025-08-14T21:52:46.1615960Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1616190Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1616418Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1616640Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1616869Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1617097Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1617313Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1617534Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1617757Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1617973Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1618198Z cudagraph partition due to non gpu ops 2025-08-14T21:52:46.1618455Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:52:46.1810062Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:46.1810391Z return mod(**inputs)
2025-08-14T21:52:46.1810792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1597, in forward
2025-08-14T21:52:46.1811206Z outputs = self.bert(
2025-08-14T21:52:46.1811595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 856, in forward
2025-08-14T21:52:46.1812016Z encoder_outputs = self.encoder(
2025-08-14T21:52:46.1812431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 537, in forward
2025-08-14T21:52:46.1812843Z layer_outputs = layer_module(
2025-08-14T21:52:46.1813192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:52:46.1813553Z return super().__call__(*args, **kwargs)
2025-08-14T21:52:46.1813975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 474, in forward
2025-08-14T21:52:46.1814394Z layer_output = apply_chunking_to_forward(
2025-08-14T21:52:46.1814803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:52:46.1815200Z return forward_fn(*input_tensors)
2025-08-14T21:52:46.1815642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 481, in feed_forward_chunk
2025-08-14T21:52:46.1816113Z intermediate_output = self.intermediate(ln_output)
2025-08-14T21:52:46.1816560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 398, in forward
2025-08-14T21:52:46.1817013Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:52:46.1817382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:52:46.1817719Z return self.act(input)
2025-08-14T21:52:46.1817835Z
2025-08-14T21:52:46.1817913Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1818123Z cudagraph partition due to non gpu ops
2025-08-14T21:52:46.1818374Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:52:46.1818764Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:46.1819106Z return mod(**inputs)
2025-08-14T21:52:46.1819617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1629, in forward
2025-08-14T21:52:46.1820136Z start_loss = loss_fct(start_logits, start_positions)
2025-08-14T21:52:46.1820315Z
2025-08-14T21:52:46.1820427Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:52:46.1820811Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:52:46.1821143Z return mod(**inputs)
2025-08-14T21:52:46.1821567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1630, in forward
2025-08-14T21:52:46.1822039Z end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:52:46.1822197Z
2025-08-14T21:52:57.0393173Z Compilation time (from dynamo_timed): 27.99159406
2025-08-14T21:52:57.0393727Z pass
2025-08-14T21:52:57.0394051Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:52:57.0395009Z TIMING: _recursive_pre_grad_passes:0.07076 _recursive_joint_graph_passes:1.20671 _recursive_post_grad_passes:0.13098 async_compile.wait:0.00365 code_gen:9.59303 inductor_compile:12.84263 backend_compile:22.26228 gc:0.00033 entire_frame_compile:27.99159 total_wall_time:27.99159
2025-08-14T21:52:57.0395935Z STATS: call_* op count: 724 | FakeTensorMode.__torch_dispatch__:51314 | FakeTensor.__torch_dispatch__:7334 | ProxyTorchDispatchMode.__torch_dispatch__:12549
2025-08-14T21:52:57.0396441Z Dynamo produced 1 graphs covering 724 ops with 0 graph breaks (0 unique)
2025-08-14T21:53:03.3371008Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:53:03.3372310Z from pkg_resources import resource_filename
2025-08-14T21:53:03.9539156Z
2025-08-14T21:53:04.6969429Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:53:04.6972444Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:53:04.7038278Z cpu eval MobileBertForMaskedLM
2025-08-14T21:53:04.9782529Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:05.1450372Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:05.3056943Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:40.3959313Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:53:40.3963074Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:53:40.3963536Z return mod(**inputs)
2025-08-14T21:53:40.3964033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:53:40.3964527Z outputs = self.mobilebert(
2025-08-14T21:53:40.3964992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-08-14T21:53:40.3965472Z embedding_output = self.embeddings(
2025-08-14T21:53:40.3965950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 199, in forward
2025-08-14T21:53:40.3966412Z inputs_embeds = torch.cat(
2025-08-14T21:53:40.3966544Z
2025-08-14T21:53:40.3966638Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3967274Z cudagraph partition due to non gpu ops.
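The "Compilation time (from dynamo_timed)" and TIMING lines above come from dynamo's timing instrumentation as reported by the benchmark runner. As a hedged, standalone sketch (not the CI runner itself), something like the following can surface a similar per-phase compile-time summary, assuming a PyTorch build where torch._dynamo.utils.compile_times() is available:

import torch
import torch._dynamo.utils as dynamo_utils

# Compile and run once so dynamo records frontend/backend compile times.
fn = torch.compile(lambda x: torch.nn.functional.gelu(x @ x.t()))
fn(torch.randn(32, 32))

# Returns a string table of per-function compile times, built on the same
# dynamo_timed bookkeeping that the report lines above draw from.
print(dynamo_utils.compile_times())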
Found from :
2025-08-14T21:53:40.3967691Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:53:40.3968132Z return mod(**inputs)
2025-08-14T21:53:40.3968571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:53:40.3969096Z outputs = self.mobilebert(
2025-08-14T21:53:40.3971098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-08-14T21:53:40.3971673Z embedding_output = self.embeddings(
2025-08-14T21:53:40.3972329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 215, in forward
2025-08-14T21:53:40.3972816Z embeddings = self.LayerNorm(embeddings)
2025-08-14T21:53:40.3973311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:53:40.3973915Z return input_tensor * self.weight + self.bias
2025-08-14T21:53:40.3974190Z
2025-08-14T21:53:40.3974328Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3974738Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:53:40.3975282Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:53:40.3975734Z return mod(**inputs)
2025-08-14T21:53:40.3976201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-08-14T21:53:40.3976771Z outputs = self.mobilebert(
2025-08-14T21:53:40.3977266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:53:40.3977844Z encoder_outputs = self.encoder(
2025-08-14T21:53:40.3978358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:53:40.3978854Z layer_outputs = layer_module(
2025-08-14T21:53:40.3979309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
2025-08-14T21:53:40.3980153Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
2025-08-14T21:53:40.3980744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
2025-08-14T21:53:40.3981254Z shared_attention_input = self.attention(hidden_states)
2025-08-14T21:53:40.3981745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
2025-08-14T21:53:40.3982215Z layer_input = self.LayerNorm(layer_input)
2025-08-14T21:53:40.3982704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:53:40.3983182Z return input_tensor * self.weight + self.bias
2025-08-14T21:53:40.3983358Z
2025-08-14T21:53:40.3983452Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3983693Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3983926Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3984145Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3984373Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3984598Z cudagraph partition due to non gpu ops
2025-08-14T21:53:40.3984816Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.3985043Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.3985270Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.3985489Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.3985758Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.3986241Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.3986613Z return mod(**inputs) 2025-08-14T21:53:40.3987100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.3987637Z outputs = self.mobilebert( 2025-08-14T21:53:40.3988106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.3988564Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.3989029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.3989490Z layer_outputs = layer_module( 2025-08-14T21:53:40.3989943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.3990419Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.3990894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.3991420Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.3991938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.3992452Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.3992974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.3993469Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.3993633Z 2025-08-14T21:53:40.3993732Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.3993989Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.3994394Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.3994756Z return mod(**inputs) 2025-08-14T21:53:40.3995191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.3995818Z outputs = self.mobilebert( 2025-08-14T21:53:40.3996454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.3997032Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.3997488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.3997949Z layer_outputs = layer_module( 2025-08-14T21:53:40.3998575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.3999315Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.3999812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.4000312Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.4000907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.4001414Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.4001611Z 2025-08-14T21:53:40.4001699Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4002026Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4002588Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4003008Z return mod(**inputs) 2025-08-14T21:53:40.4003632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4004301Z outputs = self.mobilebert( 2025-08-14T21:53:40.4004889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4005610Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4006195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4006744Z layer_outputs = layer_module( 2025-08-14T21:53:40.4007344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.4007939Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.4008642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.4009425Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.4010087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.4010628Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4011257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4011946Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4012168Z 2025-08-14T21:53:40.4012256Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4012602Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4013136Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4013645Z return mod(**inputs) 2025-08-14T21:53:40.4014238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4014795Z outputs = self.mobilebert( 2025-08-14T21:53:40.4015459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4015924Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4016517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4017037Z layer_outputs = layer_module( 2025-08-14T21:53:40.4017649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.4018139Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.4018702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.4019285Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.4019930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.4020529Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.4020721Z 2025-08-14T21:53:40.4020854Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4021165Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4021560Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4021916Z return mod(**inputs) 2025-08-14T21:53:40.4022348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4022789Z outputs = self.mobilebert( 2025-08-14T21:53:40.4023319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4023917Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4024411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4024927Z layer_outputs = layer_module( 2025-08-14T21:53:40.4025457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.4026129Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.4026621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.4027303Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.4027840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.4028346Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4028850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4029335Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4029511Z 2025-08-14T21:53:40.4029599Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4029858Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4030304Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4030725Z return mod(**inputs) 2025-08-14T21:53:40.4031290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4031727Z outputs = self.mobilebert( 2025-08-14T21:53:40.4032163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4032618Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4033053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4033647Z layer_outputs = layer_module( 2025-08-14T21:53:40.4034148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.4034631Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.4035105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.4035592Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.4036194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.4036741Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.4036961Z 2025-08-14T21:53:40.4037096Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4037373Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4037771Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4038225Z return mod(**inputs) 2025-08-14T21:53:40.4038643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4039185Z outputs = self.mobilebert( 2025-08-14T21:53:40.4039701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4040186Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4040707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4041250Z layer_outputs = layer_module( 2025-08-14T21:53:40.4042005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.4042790Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.4043324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.4043866Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.4044378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.4044873Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4045388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4045997Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4046165Z 2025-08-14T21:53:40.4046264Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4046603Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4047163Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4047532Z return mod(**inputs) 2025-08-14T21:53:40.4047954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4048410Z outputs = self.mobilebert( 2025-08-14T21:53:40.4048836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4049276Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4049699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4050139Z layer_outputs = layer_module( 2025-08-14T21:53:40.4050571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:53:40.4051062Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:53:40.4051686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.4052194Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.4052370Z 2025-08-14T21:53:40.4052467Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4052797Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4053187Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4053637Z return mod(**inputs) 2025-08-14T21:53:40.4054165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4054693Z outputs = self.mobilebert( 2025-08-14T21:53:40.4055154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4055742Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4056394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4057051Z layer_outputs = layer_module( 2025-08-14T21:53:40.4057662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.4058261Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.4058827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:53:40.4059502Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:53:40.4060420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4061190Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4061349Z 2025-08-14T21:53:40.4061439Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4061833Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4062301Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4062843Z return mod(**inputs) 2025-08-14T21:53:40.4063353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4064041Z outputs = self.mobilebert( 2025-08-14T21:53:40.4064722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4065348Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4066002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4066608Z layer_outputs = layer_module( 2025-08-14T21:53:40.4067290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.4068137Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.4068919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:53:40.4069609Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:53:40.4070300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:53:40.4070797Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4071300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4071775Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4071935Z 2025-08-14T21:53:40.4072027Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4072289Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4072685Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4073040Z return mod(**inputs) 2025-08-14T21:53:40.4073460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4073910Z outputs = self.mobilebert( 2025-08-14T21:53:40.4074361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4074890Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4075345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4075817Z layer_outputs = layer_module( 2025-08-14T21:53:40.4076265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:53:40.4076807Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:53:40.4077356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:53:40.4077858Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:53:40.4078388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:53:40.4078851Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:53:40.4079434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4079927Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4080088Z 2025-08-14T21:53:40.4080208Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4080435Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4080670Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4080900Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4081119Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4081348Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4081577Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4081793Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4082017Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4082304Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4082580Z cudagraph partition due to non gpu ops. 
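The repeated "cudagraph partition due to non gpu ops" lines appear to come from Inductor's cudagraph partitioning encountering CPU-only ops in this CPU benchmark shard. A minimal sketch, assuming one wanted to turn cudagraphs off entirely for a CPU-only torch.compile run; "triton.cudagraphs" is an Inductor config key, and its exact effect can vary across PyTorch versions, so treat this as illustrative rather than the benchmark harness's own setup.

```python
import torch
import torch.nn as nn

# Toy CPU model standing in for the benchmarked HuggingFace module.
model = nn.Sequential(nn.Linear(8, 8), nn.GELU(), nn.LayerNorm(8))

# Pass Inductor options through torch.compile; setting "triton.cudagraphs" to
# False skips cudagraph capture/partitioning for this compiled module.
compiled = torch.compile(model, options={"triton.cudagraphs": False})

out = compiled(torch.randn(4, 8))
print(out.shape)
```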
Found from : 2025-08-14T21:53:40.4261237Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4261580Z return mod(**inputs) 2025-08-14T21:53:40.4262002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4262473Z outputs = self.mobilebert( 2025-08-14T21:53:40.4262903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4263373Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4263827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4264277Z layer_outputs = layer_module( 2025-08-14T21:53:40.4264704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.4265144Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.4265590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.4266078Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.4266557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.4267046Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4267531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4267997Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4268154Z 2025-08-14T21:53:40.4268239Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4268490Z cudagraph partition due to non gpu ops. 
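Every partition trace above bottoms out in the same frame, modeling_mobilebert.py line 138 (return input_tensor * self.weight + self.bias). That is the elementwise scale-and-shift MobileBERT uses in place of LayerNorm (the NoNorm module in transformers). A minimal sketch of such a module, reconstructed from the frames above rather than copied from the library:

    import torch
    from torch import nn

    class NoNormSketch(nn.Module):
        # Per-feature scale and shift with no mean/variance normalization,
        # mirroring the "input_tensor * self.weight + self.bias" frame in the traces.
        def __init__(self, feat_size: int):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(feat_size))
            self.bias = nn.Parameter(torch.zeros(feat_size))

        def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
            return input_tensor * self.weight + self.bias

On this CPU-only shard every such op runs on CPU tensors, which is presumably why Inductor's cudagraph partitioner keeps reporting these call sites as non gpu ops.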
Found from : 2025-08-14T21:53:40.4452064Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4452132Z return mod(**inputs) 2025-08-14T21:53:40.4452438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4452511Z outputs = self.mobilebert( 2025-08-14T21:53:40.4452807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4452885Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4453181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4453262Z layer_outputs = layer_module( 2025-08-14T21:53:40.4453555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:53:40.4453736Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:53:40.4454033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:53:40.4454147Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:53:40.4454447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:53:40.4454540Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:53:40.4454839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4454934Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4454940Z 2025-08-14T21:53:40.4455025Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455112Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455192Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455270Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455357Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455438Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455518Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455604Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455681Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455767Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4455874Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4456083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4456158Z return mod(**inputs) 2025-08-14T21:53:40.4456460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4456535Z outputs = self.mobilebert( 2025-08-14T21:53:40.4456835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4456932Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4457238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4457312Z layer_outputs = layer_module( 2025-08-14T21:53:40.4457637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.4457754Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.4458051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.4458200Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.4458513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.4458646Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4458957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4459052Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4459058Z 2025-08-14T21:53:40.4459141Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4459261Z cudagraph partition due to non gpu ops. 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4547885Z 2025-08-14T21:53:40.4547968Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4548081Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4548289Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4548360Z return mod(**inputs) 2025-08-14T21:53:40.4548658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4548732Z outputs = self.mobilebert( 2025-08-14T21:53:40.4549029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4549118Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4549417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4549521Z layer_outputs = layer_module( 2025-08-14T21:53:40.4549823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:53:40.4549978Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:53:40.4550316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.4550438Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.4550442Z 2025-08-14T21:53:40.4550537Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4550671Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4550894Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4550973Z return mod(**inputs) 2025-08-14T21:53:40.4551270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4551346Z outputs = self.mobilebert( 2025-08-14T21:53:40.4551648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4551726Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4552026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4552101Z layer_outputs = layer_module( 2025-08-14T21:53:40.4552393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.4552572Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.4552867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:53:40.4553003Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:53:40.4553296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4553394Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4553398Z 2025-08-14T21:53:40.4553490Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4553600Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4553816Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4553890Z return mod(**inputs) 2025-08-14T21:53:40.4554184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4554266Z outputs = self.mobilebert( 2025-08-14T21:53:40.4554559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4554635Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4554936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4555010Z layer_outputs = layer_module( 2025-08-14T21:53:40.4555311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.4555477Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.4555770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:53:40.4555904Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:53:40.4556239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:53:40.4556376Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4556670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4556789Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4556793Z 2025-08-14T21:53:40.4556897Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4557008Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4557215Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4557308Z return mod(**inputs) 2025-08-14T21:53:40.4557605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4557686Z outputs = self.mobilebert( 2025-08-14T21:53:40.4557983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4558060Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4558364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4558441Z layer_outputs = layer_module( 2025-08-14T21:53:40.4558738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:53:40.4558917Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:53:40.4559227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:53:40.4559350Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:53:40.4559655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:53:40.4559746Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:53:40.4560046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4560144Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4560148Z 2025-08-14T21:53:40.4560241Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560322Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560402Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560491Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560569Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560646Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560733Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560811Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560898Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4560976Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4561086Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4561301Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4561371Z return mod(**inputs) 2025-08-14T21:53:40.4561665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4561748Z outputs = self.mobilebert( 2025-08-14T21:53:40.4562055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4562141Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4562443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4562537Z layer_outputs = layer_module( 2025-08-14T21:53:40.4562839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.4562931Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.4563244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.4563401Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.4563696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.4563854Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4564149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4564245Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4564251Z 2025-08-14T21:53:40.4564340Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4564449Z cudagraph partition due to non gpu ops. 
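Every "cudagraph partition due to non gpu ops" stack above bottoms out in the same MobileBert call sites: the FFN activation at modeling_mobilebert.py line 360 and the element-wise normalization at line 138 (return input_tensor * self.weight + self.bias, which appears to be transformers' NoNorm layer). As a rough sketch only, assuming a minimal parameter setup that is not taken from this log, that element-wise pattern looks like:

    # Hypothetical stand-in for the module the traces end in
    # (modeling_mobilebert.py line 138); only the forward line is
    # confirmed by the traces above, the rest is an assumed minimal setup.
    import torch
    from torch import nn

    class NoNormSketch(nn.Module):
        def __init__(self, feat_size: int):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(feat_size))
            self.bias = nn.Parameter(torch.zeros(feat_size))

        def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
            # Pure element-wise arithmetic; in this CPU benchmark these are
            # not CUDA kernels, which is presumably why the cudagraph
            # partitioner reports "non gpu ops" at this frame.
            return input_tensor * self.weight + self.bias

    if __name__ == "__main__":
        layer = NoNormSketch(512)
        out = layer(torch.randn(2, 128, 512))  # same shape out; purely element-wise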
cudagraph partition due to non gpu ops. Found from :  (reported 3 times)
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)

cudagraph partition due to non gpu ops

cudagraph partition due to non gpu ops. Found from :  (reported 3 times)
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops

cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)

cudagraph partition due to non gpu ops

cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
    layer_output = self.LayerNorm(layer_output + residual_tensor_1)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops

cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
    layer_output = self.bottleneck(layer_output, residual_tensor_2)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
(the same set of cudagraph partition messages and stack traces repeats twice more, followed by one further occurrence of the first trace)
Found from : 2025-08-14T21:53:40.4761330Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4761406Z return mod(**inputs) 2025-08-14T21:53:40.4761681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4761751Z outputs = self.mobilebert( 2025-08-14T21:53:40.4762032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4762106Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4762389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4762462Z layer_outputs = layer_module( 2025-08-14T21:53:40.4762744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.4762840Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.4763111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.4763239Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.4763508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.4763631Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4763908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4763995Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4764000Z 2025-08-14T21:53:40.4764083Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4764186Z cudagraph partition due to non gpu ops. 
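Most of the call stacks above bottom out at modeling_mobilebert.py line 138, return input_tensor * self.weight + self.bias (the remainder end in the intermediate activation function). That line corresponds to MobileBERT's NoNorm module, the element-wise scale-and-shift transformers uses in place of true LayerNorm for this model. A minimal sketch of that module follows, written here only for orientation and not copied from the pinned transformers version:

import torch
from torch import nn

class NoNorm(nn.Module):
    # Element-wise affine transform used by MobileBERT instead of LayerNorm;
    # its forward is the line the partition diagnostics keep pointing at.
    def __init__(self, feat_size: int):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(feat_size))
        self.weight = nn.Parameter(torch.ones(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        return input_tensor * self.weight + self.bias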
Found from : 2025-08-14T21:53:40.4857441Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4857528Z return mod(**inputs) 2025-08-14T21:53:40.4857817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4857899Z outputs = self.mobilebert( 2025-08-14T21:53:40.4858190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4858274Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4858565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4858640Z layer_outputs = layer_module( 2025-08-14T21:53:40.4858938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.4859102Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.4859398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:53:40.4859581Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:53:40.4859903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:53:40.4860042Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4860345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4860457Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4860471Z 2025-08-14T21:53:40.4860556Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4860666Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4860881Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4860955Z return mod(**inputs) 2025-08-14T21:53:40.4861247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4861331Z outputs = self.mobilebert( 2025-08-14T21:53:40.4861624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4861712Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4862008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4862086Z layer_outputs = layer_module( 2025-08-14T21:53:40.4862391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:53:40.4862563Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:53:40.4862859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:53:40.4862983Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:53:40.4863308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:53:40.4863412Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:53:40.4863705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4863824Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4863847Z 2025-08-14T21:53:40.4863939Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864021Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864108Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864209Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864288Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864376Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864456Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864536Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864633Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864711Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4864820Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4865035Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4865106Z return mod(**inputs) 2025-08-14T21:53:40.4865404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4865479Z outputs = self.mobilebert( 2025-08-14T21:53:40.4865772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4865858Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4866148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4866226Z layer_outputs = layer_module( 2025-08-14T21:53:40.4866522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.4866613Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.4866912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.4867044Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.4867336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.4867476Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4867767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4867872Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4867876Z 2025-08-14T21:53:40.4867960Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4868069Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4950501Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4950579Z return mod(**inputs) 2025-08-14T21:53:40.4950884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4950966Z outputs = self.mobilebert( 2025-08-14T21:53:40.4951260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4951338Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4951645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4951720Z layer_outputs = layer_module( 2025-08-14T21:53:40.4952014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.4952121Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.4952413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.4952538Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.4952829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.4952945Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.4952948Z 2025-08-14T21:53:40.4953039Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4953147Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4953364Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4953433Z return mod(**inputs) 2025-08-14T21:53:40.4953724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4953807Z outputs = self.mobilebert( 2025-08-14T21:53:40.4954103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4954179Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4954478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4954550Z layer_outputs = layer_module( 2025-08-14T21:53:40.4954854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.4954986Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.4955277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.4955433Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.4955738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.4955874Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4956176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4956293Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4956296Z 2025-08-14T21:53:40.4956388Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4956496Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4956710Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4956788Z return mod(**inputs) 2025-08-14T21:53:40.4957087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4957171Z outputs = self.mobilebert( 2025-08-14T21:53:40.4957470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4957547Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4957857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4957930Z layer_outputs = layer_module( 2025-08-14T21:53:40.4958245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:53:40.4958377Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:53:40.4958677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.4958802Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.4958806Z 2025-08-14T21:53:40.4958889Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4958998Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.4959218Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4959289Z return mod(**inputs) 2025-08-14T21:53:40.4959603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4959677Z outputs = self.mobilebert( 2025-08-14T21:53:40.4959987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4960073Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4960372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4960456Z layer_outputs = layer_module( 2025-08-14T21:53:40.4960757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.4960926Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.4961237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:53:40.4961365Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:53:40.4961684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4961791Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4961795Z 2025-08-14T21:53:40.4961879Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4962013Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4962224Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4962311Z return mod(**inputs) 2025-08-14T21:53:40.4962611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4962704Z outputs = self.mobilebert( 2025-08-14T21:53:40.4962999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4963077Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4963369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4963452Z layer_outputs = layer_module( 2025-08-14T21:53:40.4963742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.4963910Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.4964211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:53:40.4964338Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:53:40.4964634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:53:40.4964760Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4965051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4965152Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4965156Z 2025-08-14T21:53:40.4965238Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4965356Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4965566Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4965635Z return mod(**inputs) 2025-08-14T21:53:40.4965935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4966012Z outputs = self.mobilebert( 2025-08-14T21:53:40.4966303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4966387Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4966676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4966758Z layer_outputs = layer_module( 2025-08-14T21:53:40.4967048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:53:40.4967220Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:53:40.4967520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:53:40.4967638Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:53:40.4967936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:53:40.4968029Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:53:40.4968341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4968445Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4968449Z 2025-08-14T21:53:40.4968533Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4968638Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4968727Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4968827Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4968917Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4968995Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4969096Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4969180Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4969259Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4969338Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4969455Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.4969667Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.4969743Z return mod(**inputs) 2025-08-14T21:53:40.4970034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.4970110Z outputs = self.mobilebert( 2025-08-14T21:53:40.4970409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.4970484Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.4970776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.4970860Z layer_outputs = layer_module( 2025-08-14T21:53:40.4971151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.4971250Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.4971540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.4971668Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.4971972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.4972102Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.4972413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.4972510Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.4972513Z 2025-08-14T21:53:40.4972596Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.4972712Z cudagraph partition due to non gpu ops. 
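Every "Found from" trace above resolves to one of two statements in modeling_mobilebert.py: the intermediate activation at line 360 (hidden_states = self.intermediate_act_fn(hidden_states)) or the elementwise affine at line 138 (return input_tensor * self.weight + self.bias), which is what self.LayerNorm resolves to in this model, the NoNorm variant used in place of a real LayerNorm. For orientation only, here is a minimal Python sketch of that affine pattern under torch.compile. The class name NoNormSketch, the feature size, and the input shape are illustrative assumptions rather than values from this run, and compiling the sketch is not claimed to reproduce these partition messages; "reduce-overhead" is simply the torch.compile mode that enables CUDA graphs, which is where partition decisions of this kind are made.

import torch
import torch.nn as nn

# Illustrative sketch (not taken from the benchmark): the elementwise affine that the
# "Found from" frames above report at modeling_mobilebert.py:138.
class NoNormSketch(nn.Module):
    def __init__(self, feat_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # Same expression as the traced source line: a multiply and an add, no normalization.
        return input_tensor * self.weight + self.bias

if __name__ == "__main__":
    mod = NoNormSketch(512).eval()                          # feature size is an assumption
    compiled = torch.compile(mod, mode="reduce-overhead")   # mode that enables CUDA graphs
    x = torch.randn(1, 128, 512)                            # batch/sequence shape is an assumption
    with torch.no_grad():
        out = compiled(x)
    print(out.shape)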
Found from : 2025-08-14T21:53:40.5039500Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5039566Z return mod(**inputs) 2025-08-14T21:53:40.5039855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5039932Z outputs = self.mobilebert( 2025-08-14T21:53:40.5040205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5040284Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5040556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5040626Z layer_outputs = layer_module( 2025-08-14T21:53:40.5040908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5041002Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5041279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.5041398Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.5041676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5041912Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5041919Z 2025-08-14T21:53:40.5042003Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5042107Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5042316Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5042384Z return mod(**inputs) 2025-08-14T21:53:40.5042676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5042752Z outputs = self.mobilebert( 2025-08-14T21:53:40.5043048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5043135Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5043435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5043568Z layer_outputs = layer_module( 2025-08-14T21:53:40.5043855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5043978Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5044290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.5044417Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.5044697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.5044853Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5045127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5045228Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5045232Z 2025-08-14T21:53:40.5045311Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5045413Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5045615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5045680Z return mod(**inputs) 2025-08-14T21:53:40.5045953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5046033Z outputs = self.mobilebert( 2025-08-14T21:53:40.5046305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5046385Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5046660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5046733Z layer_outputs = layer_module( 2025-08-14T21:53:40.5047013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5047108Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5047390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.5047501Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.5047774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5047893Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5047896Z 2025-08-14T21:53:40.5047974Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5048075Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5048277Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5048341Z return mod(**inputs) 2025-08-14T21:53:40.5048621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5048693Z outputs = self.mobilebert( 2025-08-14T21:53:40.5048966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5049046Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5049319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5049388Z layer_outputs = layer_module( 2025-08-14T21:53:40.5049669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5049791Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5050074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.5050214Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.5050509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.5050637Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5050914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5051028Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5051032Z 2025-08-14T21:53:40.5051112Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5051215Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5051420Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5051485Z return mod(**inputs) 2025-08-14T21:53:40.5051757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5051836Z outputs = self.mobilebert( 2025-08-14T21:53:40.5052116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5052196Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5052475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5052543Z layer_outputs = layer_module( 2025-08-14T21:53:40.5052825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5052919Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5053200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.5053312Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.5053592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5053707Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5053711Z 2025-08-14T21:53:40.5053791Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5053901Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5054099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5054166Z return mod(**inputs) 2025-08-14T21:53:40.5054474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5054549Z outputs = self.mobilebert( 2025-08-14T21:53:40.5054839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5054928Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5055222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5055304Z layer_outputs = layer_module( 2025-08-14T21:53:40.5055596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5055696Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5055994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.5056149Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.5056440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.5056594Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5056903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5057009Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5057012Z 2025-08-14T21:53:40.5057113Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5057224Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5057441Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5057509Z return mod(**inputs) 2025-08-14T21:53:40.5057811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5057886Z outputs = self.mobilebert( 2025-08-14T21:53:40.5058175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5058263Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5058554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5058628Z layer_outputs = layer_module( 2025-08-14T21:53:40.5058931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:53:40.5059059Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:53:40.5059364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5059478Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5059482Z 2025-08-14T21:53:40.5059632Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5059760Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5059978Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5060060Z return mod(**inputs) 2025-08-14T21:53:40.5060362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5060440Z outputs = self.mobilebert( 2025-08-14T21:53:40.5060751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5060837Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5061140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5061223Z layer_outputs = layer_module( 2025-08-14T21:53:40.5061512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.5061688Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.5061998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:53:40.5062126Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:53:40.5062421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5062512Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5062516Z 2025-08-14T21:53:40.5062604Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5062736Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5062934Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5063011Z return mod(**inputs) 2025-08-14T21:53:40.5063310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5063398Z outputs = self.mobilebert( 2025-08-14T21:53:40.5063684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5063776Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5064076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5064151Z layer_outputs = layer_module( 2025-08-14T21:53:40.5064441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.5064615Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.5064890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:53:40.5065019Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:53:40.5065294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:53:40.5065416Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5065699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5065788Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5065791Z 2025-08-14T21:53:40.5065879Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5065983Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5066178Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5066252Z return mod(**inputs) 2025-08-14T21:53:40.5066529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5066599Z outputs = self.mobilebert( 2025-08-14T21:53:40.5066888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5066965Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5067261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5067334Z layer_outputs = layer_module( 2025-08-14T21:53:40.5067634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:53:40.5067801Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:53:40.5068079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:53:40.5068197Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:53:40.5068472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:53:40.5068562Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:53:40.5068844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5068933Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5068936Z 2025-08-14T21:53:40.5069037Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069122Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069198Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069280Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069376Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069450Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069531Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069625Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069702Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069784Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5069916Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5070112Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5070185Z return mod(**inputs) 2025-08-14T21:53:40.5070465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5070544Z outputs = self.mobilebert( 2025-08-14T21:53:40.5070816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5070890Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5071175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5071246Z layer_outputs = layer_module( 2025-08-14T21:53:40.5071530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:53:40.5071619Z self_attention_outputs = self.attention( 2025-08-14T21:53:40.5071895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:53:40.5072027Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:53:40.5072303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:53:40.5072426Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5072711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5072801Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5072804Z 2025-08-14T21:53:40.5072889Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5072994Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5073189Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5073265Z return mod(**inputs) 2025-08-14T21:53:40.5073549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5073627Z outputs = self.mobilebert( 2025-08-14T21:53:40.5073899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5073973Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5074258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5074327Z layer_outputs = layer_module( 2025-08-14T21:53:40.5074608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5074712Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5074987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.5075127Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.5075406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5075515Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5075536Z 2025-08-14T21:53:40.5075624Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5075741Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5075947Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5076032Z return mod(**inputs) 2025-08-14T21:53:40.5076311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5076389Z outputs = self.mobilebert( 2025-08-14T21:53:40.5076672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5076745Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5077034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5077105Z layer_outputs = layer_module( 2025-08-14T21:53:40.5077392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5077487Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5077770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.5077906Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.5078188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.5078319Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5078596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5078688Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5078691Z 2025-08-14T21:53:40.5078778Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5078882Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5079081Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5079155Z return mod(**inputs) 2025-08-14T21:53:40.5079436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5079513Z outputs = self.mobilebert( 2025-08-14T21:53:40.5079794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5079867Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5080155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5080227Z layer_outputs = layer_module( 2025-08-14T21:53:40.5080512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5080607Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5080883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.5081004Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.5081288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5081415Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5081427Z 2025-08-14T21:53:40.5081508Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5081609Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5081837Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5081902Z return mod(**inputs) 2025-08-14T21:53:40.5082195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5082276Z outputs = self.mobilebert( 2025-08-14T21:53:40.5082569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5082647Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5082927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5082998Z layer_outputs = layer_module( 2025-08-14T21:53:40.5083276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5083369Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5083641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.5083771Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.5084048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.5084178Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5084457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5084550Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5084554Z 2025-08-14T21:53:40.5084641Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5084742Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5084950Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5085015Z return mod(**inputs) 2025-08-14T21:53:40.5085292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5085371Z outputs = self.mobilebert( 2025-08-14T21:53:40.5085648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5085721Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5086004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5086074Z layer_outputs = layer_module( 2025-08-14T21:53:40.5086412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5086506Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5086783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:53:40.5086903Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:53:40.5087181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5087301Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5087305Z 2025-08-14T21:53:40.5087385Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5087488Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5087711Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5087779Z return mod(**inputs) 2025-08-14T21:53:40.5088058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5088155Z outputs = self.mobilebert( 2025-08-14T21:53:40.5088455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5088536Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5088830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5088900Z layer_outputs = layer_module( 2025-08-14T21:53:40.5089184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:53:40.5089279Z attention_output = ffn_module(attention_output) 2025-08-14T21:53:40.5089560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:53:40.5089691Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:53:40.5089969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:53:40.5090097Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5090375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5090465Z return 
input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5090476Z 2025-08-14T21:53:40.5090555Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5090659Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5090863Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5090929Z return mod(**inputs) 2025-08-14T21:53:40.5091205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5091286Z outputs = self.mobilebert( 2025-08-14T21:53:40.5091567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5091640Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5091931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5092002Z layer_outputs = layer_module( 2025-08-14T21:53:40.5092290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:53:40.5092410Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:53:40.5092689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:53:40.5092807Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:53:40.5092810Z 2025-08-14T21:53:40.5092891Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5093000Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5093196Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5093264Z return mod(**inputs) 2025-08-14T21:53:40.5093548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5093618Z outputs = self.mobilebert( 2025-08-14T21:53:40.5093927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5094011Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5094288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5094388Z layer_outputs = layer_module( 2025-08-14T21:53:40.5094693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.5094851Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.5095171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:53:40.5095298Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:53:40.5095600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5095696Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5095700Z 2025-08-14T21:53:40.5095781Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5095898Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:53:40.5096104Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5096182Z return mod(**inputs) 2025-08-14T21:53:40.5096471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward 2025-08-14T21:53:40.5096548Z outputs = self.mobilebert( 2025-08-14T21:53:40.5096844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:53:40.5096920Z encoder_outputs = self.encoder( 2025-08-14T21:53:40.5097211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:53:40.5097291Z layer_outputs = layer_module( 2025-08-14T21:53:40.5097580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:53:40.5097755Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:53:40.5098046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:53:40.5098175Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:53:40.5098477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:53:40.5098602Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:53:40.5098908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:53:40.5099003Z return input_tensor * self.weight + self.bias 2025-08-14T21:53:40.5099006Z 2025-08-14T21:53:40.5099088Z cudagraph partition due to non gpu ops 2025-08-14T21:53:40.5099205Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:53:40.5099414Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:53:40.5099482Z return mod(**inputs) 2025-08-14T21:53:40.5099864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 989, in forward 2025-08-14T21:53:40.5099970Z prediction_scores = self.cls(sequence_output) 2025-08-14T21:53:40.5100271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 643, in forward 2025-08-14T21:53:40.5100422Z prediction_scores = self.predictions(sequence_output) 2025-08-14T21:53:40.5100725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 632, in forward 2025-08-14T21:53:40.5100958Z hidden_states = hidden_states.matmul(torch.cat([self.decoder.weight.t(), self.dense.weight], dim=0)) 2025-08-14T21:53:40.5100984Z 2025-08-14T21:53:40.5101099Z cudagraph partition due to non gpu ops. 
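Note: the last trace above attributes a partition to the LM head, modeling_mobilebert.py line 632, where the output projection is built by concatenating the transposed decoder weight with an extra dense weight and applying one matmul (the in-place bias add at line 633 appears in the next trace). Below is a shape-only sketch of that computation; the names and dimensions are made up and chosen only so the concatenation lines up, they are not MobileBERT's real configuration.

    import torch

    # Hypothetical sizes for illustration only.
    batch, seq, hidden, embed, vocab = 2, 4, 8, 3, 11
    hidden_states = torch.randn(batch, seq, hidden)
    decoder_weight = torch.randn(vocab, embed)          # decoder: Linear(embed -> vocab); weight is (out, in)
    dense_weight = torch.randn(hidden - embed, vocab)   # extra rows so the concatenation spans `hidden`
    decoder_bias = torch.zeros(vocab)

    # Mirrors hidden_states.matmul(torch.cat([self.decoder.weight.t(), self.dense.weight], dim=0))
    weight = torch.cat([decoder_weight.t(), dense_weight], dim=0)  # (hidden, vocab)
    scores = hidden_states.matmul(weight) + decoder_bias           # (batch, seq, vocab)
    print(scores.shape)  # torch.Size([2, 4, 11])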
Found from :
2025-08-14T21:53:40.5101345Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:53:40.5101420Z return mod(**inputs)
2025-08-14T21:53:40.5101755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 994, in forward
2025-08-14T21:53:40.5101964Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:53:40.5101968Z
2025-08-14T21:53:40.5102077Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:53:40.5102295Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:53:40.5102365Z return mod(**inputs)
2025-08-14T21:53:40.5102658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 989, in forward
2025-08-14T21:53:40.5102763Z prediction_scores = self.cls(sequence_output)
2025-08-14T21:53:40.5103053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 643, in forward
2025-08-14T21:53:40.5103170Z prediction_scores = self.predictions(sequence_output)
2025-08-14T21:53:40.5103472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 633, in forward
2025-08-14T21:53:40.5103555Z hidden_states += self.decoder.bias
2025-08-14T21:53:40.5103558Z
2025-08-14T21:53:55.3891033Z Compilation time (from dynamo_timed): 48.799755939
2025-08-14T21:53:55.3891401Z pass
2025-08-14T21:53:55.3891709Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:53:55.3896911Z TIMING: _recursive_pre_grad_passes:0.17072 _recursive_joint_graph_passes:1.43207 _recursive_post_grad_passes:0.21618 async_compile.wait:0.8671 code_gen:11.20822 inductor_compile:16.66758 backend_compile:35.53156 gc:0.00019 entire_frame_compile:48.79976 total_wall_time:48.79976
2025-08-14T21:53:55.3897980Z STATS: call_* op count: 1449 | FakeTensorMode.__torch_dispatch__:103338 | FakeTensor.__torch_dispatch__:12500 | ProxyTorchDispatchMode.__torch_dispatch__:23208
2025-08-14T21:53:55.3898564Z Dynamo produced 1 graphs covering 1449 ops with 0 graph breaks (0 unique)
2025-08-14T21:54:01.7893567Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:54:01.7894502Z from pkg_resources import resource_filename
2025-08-14T21:54:02.4988797Z
2025-08-14T21:54:03.1078615Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:54:03.1079070Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:54:03.1151038Z cpu eval MobileBertForQuestionAnswering
2025-08-14T21:54:03.3256920Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:03.4697798Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:03.6147100Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:38.6477706Z cudagraph partition due to non gpu ops.
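Note: the summary entries above give the harness's own compile-time accounting from dynamo_timed: 48.8 s total wall time, with backend_compile at 35.5 s and inductor_compile at 16.7 s, and a single Dynamo graph covering 1449 ops with no graph breaks. A rough way to observe the same first-call compile cost outside the harness is simply to time the first call of a torch.compile'd module against a later, cached call; the module and input shapes below are hypothetical.

    import time
    import torch
    import torch.nn as nn

    model = torch.compile(nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 512)))
    x = torch.randn(8, 512)

    t0 = time.perf_counter()
    model(x)                 # first call pays the Dynamo + Inductor compilation cost
    t1 = time.perf_counter()
    model(x)                 # later calls reuse the compiled code
    t2 = time.perf_counter()
    print(f"first call (with compile): {t1 - t0:.3f}s, steady state: {t2 - t1:.3f}s")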
Found from : 2025-08-14T21:54:38.6480885Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6481447Z return mod(**inputs) 2025-08-14T21:54:38.6482393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6482971Z outputs = self.mobilebert( 2025-08-14T21:54:38.6483644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward 2025-08-14T21:54:38.6491370Z embedding_output = self.embeddings( 2025-08-14T21:54:38.6491947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 199, in forward 2025-08-14T21:54:38.6492504Z inputs_embeds = torch.cat( 2025-08-14T21:54:38.6492656Z 2025-08-14T21:54:38.6498895Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6499190Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6499760Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6500153Z return mod(**inputs) 2025-08-14T21:54:38.6500634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6501204Z outputs = self.mobilebert( 2025-08-14T21:54:38.6501670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward 2025-08-14T21:54:38.6502125Z embedding_output = self.embeddings( 2025-08-14T21:54:38.6502574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 215, in forward 2025-08-14T21:54:38.6503027Z embeddings = self.LayerNorm(embeddings) 2025-08-14T21:54:38.6503470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6503928Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6504095Z 2025-08-14T21:54:38.6504187Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6504446Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6505616Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6506186Z return mod(**inputs) 2025-08-14T21:54:38.6506691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6507191Z outputs = self.mobilebert( 2025-08-14T21:54:38.6507648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6508301Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6508966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6509615Z layer_outputs = layer_module( 2025-08-14T21:54:38.6510261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.6511125Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.6511940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.6512512Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.6512993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.6513451Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.6513909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6514369Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6514541Z 2025-08-14T21:54:38.6514802Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6515048Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6515275Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6515565Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6515838Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6516050Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6516309Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6516534Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6516753Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6516963Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6517249Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6517650Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6518001Z return mod(**inputs) 2025-08-14T21:54:38.6518581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6519193Z outputs = self.mobilebert( 2025-08-14T21:54:38.6519727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6520227Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6520824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6521383Z layer_outputs = layer_module( 2025-08-14T21:54:38.6522010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.6522538Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.6523000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.6523526Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.6524052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.6524566Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6525172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6525714Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6525880Z 2025-08-14T21:54:38.6525967Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6526223Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6526612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6526956Z return mod(**inputs) 2025-08-14T21:54:38.6527385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6527846Z outputs = self.mobilebert( 2025-08-14T21:54:38.6528298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6528837Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6529348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6529825Z layer_outputs = layer_module( 2025-08-14T21:54:38.6530280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6530766Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6531252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6531795Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6532296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6532893Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6533104Z 2025-08-14T21:54:38.6533238Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6533547Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6533950Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6535472Z return mod(**inputs) 2025-08-14T21:54:38.6535930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6536468Z outputs = self.mobilebert( 2025-08-14T21:54:38.6537020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6537492Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6538028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6538594Z layer_outputs = layer_module( 2025-08-14T21:54:38.6539079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6539916Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6540513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6541042Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6541567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6542479Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6542988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6543461Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6543640Z 2025-08-14T21:54:38.6543777Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6544092Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6544518Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6544947Z return mod(**inputs) 2025-08-14T21:54:38.6545380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6545835Z outputs = self.mobilebert( 2025-08-14T21:54:38.6546271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6546722Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6547170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6547614Z layer_outputs = layer_module( 2025-08-14T21:54:38.6548068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6548541Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6549010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6549503Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6550129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6550835Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6551038Z 2025-08-14T21:54:38.6551130Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6551394Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6551843Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6552210Z return mod(**inputs) 2025-08-14T21:54:38.6552702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6553191Z outputs = self.mobilebert( 2025-08-14T21:54:38.6553750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6554225Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6554768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6555209Z layer_outputs = layer_module( 2025-08-14T21:54:38.6555661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6556138Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6556608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6557160Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6557740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6558353Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6558868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6559319Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6559487Z 2025-08-14T21:54:38.6559574Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6559832Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6560208Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6560562Z return mod(**inputs) 2025-08-14T21:54:38.6560981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6561421Z outputs = self.mobilebert( 2025-08-14T21:54:38.6561840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6562278Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6562711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6563138Z layer_outputs = layer_module( 2025-08-14T21:54:38.6563667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6564186Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6564688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6565172Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6565752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6566370Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6566544Z 2025-08-14T21:54:38.6566640Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6567051Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6567543Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6567893Z return mod(**inputs) 2025-08-14T21:54:38.6568419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6568897Z outputs = self.mobilebert( 2025-08-14T21:54:38.6569347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6569788Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6570252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6570693Z layer_outputs = layer_module( 2025-08-14T21:54:38.6571122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6571579Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6572148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6572735Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6573242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6573723Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6574239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6574725Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6574887Z 2025-08-14T21:54:38.6574985Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6575249Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6575650Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6576007Z return mod(**inputs) 2025-08-14T21:54:38.6576432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6576992Z outputs = self.mobilebert( 2025-08-14T21:54:38.6577542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6578001Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6578442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6578893Z layer_outputs = layer_module( 2025-08-14T21:54:38.6579342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.6579959Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.6580457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6580964Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6581148Z 2025-08-14T21:54:38.6581249Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6581518Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6581911Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6582276Z return mod(**inputs) 2025-08-14T21:54:38.6582709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6583158Z outputs = self.mobilebert( 2025-08-14T21:54:38.6583789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6584309Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6584787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6585253Z layer_outputs = layer_module( 2025-08-14T21:54:38.6585717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6586265Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6586867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.6587483Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.6588105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6588684Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6588849Z 2025-08-14T21:54:38.6588948Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6589304Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6589701Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6590055Z return mod(**inputs) 2025-08-14T21:54:38.6590476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6590932Z outputs = self.mobilebert( 2025-08-14T21:54:38.6591371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6591830Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6592301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6592734Z layer_outputs = layer_module( 2025-08-14T21:54:38.6593180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6593718Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6594250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.6594761Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.6595271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.6595874Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6596412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6596928Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6597082Z 2025-08-14T21:54:38.6597180Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6597427Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6597816Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6598159Z return mod(**inputs) 2025-08-14T21:54:38.6598566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6599008Z outputs = self.mobilebert( 2025-08-14T21:54:38.6599434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6599986Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6600417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6600966Z layer_outputs = layer_module( 2025-08-14T21:54:38.6601516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.6602089Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.6602742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.6603292Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.6603827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.6604375Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.6604828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6605283Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6605439Z 2025-08-14T21:54:38.6605533Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6605789Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6606059Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6606285Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6606511Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6606802Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6607023Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6607232Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6607451Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6607709Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6608005Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6608391Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6608829Z return mod(**inputs) 2025-08-14T21:54:38.6609246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6609690Z outputs = self.mobilebert( 2025-08-14T21:54:38.6610121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6610559Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6610996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6611447Z layer_outputs = layer_module( 2025-08-14T21:54:38.6611992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.6612511Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.6613018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.6613501Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.6613990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.6614572Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6615090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6615647Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6615816Z 2025-08-14T21:54:38.6615904Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6616185Z cudagraph partition due to non gpu ops. 
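For context when reading the frames above: most of these stacks bottom out at modeling_mobilebert.py line 138, MobileBERT's no-op LayerNorm replacement, reached through a residual-add-then-norm block (line 458); the remainder end at the FFN activation (line 360). A minimal sketch of that code shape, paraphrased from the cited frames rather than copied from this log (class names, sizes, and the FFNOutputSketch wrapper are illustrative), is:

```python
import torch
from torch import nn

class NoNorm(nn.Module):
    """Elementwise affine used by MobileBERT instead of LayerNorm; matches the
    cited frame: return input_tensor * self.weight + self.bias."""
    def __init__(self, feat_size):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor):
        return input_tensor * self.weight + self.bias

class FFNOutputSketch(nn.Module):
    """Residual-add-then-norm block shaped like the frame at line 458."""
    def __init__(self, intermediate_size, hidden_size):
        super().__init__()
        self.dense = nn.Linear(intermediate_size, hidden_size)
        self.LayerNorm = NoNorm(hidden_size)

    def forward(self, hidden_states, residual_tensor):
        layer_outputs = self.dense(hidden_states)
        layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
        return layer_outputs
```

The "cudagraph partition due to non gpu ops" lines simply record that Inductor's cudagraph partitioning found non-GPU ops whose origin traces back to these source locations.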
Found from : 2025-08-14T21:54:38.6616667Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6617091Z return mod(**inputs) 2025-08-14T21:54:38.6617548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6618156Z outputs = self.mobilebert( 2025-08-14T21:54:38.6618679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6619248Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6619827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6620384Z layer_outputs = layer_module( 2025-08-14T21:54:38.6620874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6621416Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6621889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6622380Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6622874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6623370Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6623555Z 2025-08-14T21:54:38.6623655Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6623908Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6624330Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6624686Z return mod(**inputs) 2025-08-14T21:54:38.6625112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6625673Z outputs = self.mobilebert( 2025-08-14T21:54:38.6626223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6626672Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6627108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6627558Z layer_outputs = layer_module( 2025-08-14T21:54:38.6628004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6628469Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6628940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6629445Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6630043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6631261Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6631767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6632244Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6632410Z 2025-08-14T21:54:38.6632507Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6632762Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6633167Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6633531Z return mod(**inputs) 2025-08-14T21:54:38.6634111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6634564Z outputs = self.mobilebert( 2025-08-14T21:54:38.6635108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6635574Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6636038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6636501Z layer_outputs = layer_module( 2025-08-14T21:54:38.6636960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6637434Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6638002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6638609Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6639124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6639739Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6639920Z 2025-08-14T21:54:38.6640010Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6640270Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6640661Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6641006Z return mod(**inputs) 2025-08-14T21:54:38.6641445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6642003Z outputs = self.mobilebert( 2025-08-14T21:54:38.6642435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6642870Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6643313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6643875Z layer_outputs = layer_module( 2025-08-14T21:54:38.6644403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6644886Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6645347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6645837Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6646331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6646833Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6647426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6647914Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6648153Z 2025-08-14T21:54:38.6648259Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6648523Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6648921Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6649276Z return mod(**inputs) 2025-08-14T21:54:38.6649706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6650158Z outputs = self.mobilebert( 2025-08-14T21:54:38.6650686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6651146Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6651589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6652069Z layer_outputs = layer_module( 2025-08-14T21:54:38.6652546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6653101Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6653607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6654208Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6654701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6655189Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6655374Z 2025-08-14T21:54:38.6655461Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6655720Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6656195Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6656563Z return mod(**inputs) 2025-08-14T21:54:38.6657102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6657647Z outputs = self.mobilebert( 2025-08-14T21:54:38.6658091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6658645Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6659100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6659622Z layer_outputs = layer_module( 2025-08-14T21:54:38.6660087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6660570Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6661051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6661552Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6662065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6662573Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6663078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6663644Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6663821Z 2025-08-14T21:54:38.6663913Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6664271Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6664667Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6665021Z return mod(**inputs) 2025-08-14T21:54:38.6665451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6665899Z outputs = self.mobilebert( 2025-08-14T21:54:38.6666323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6666771Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6667245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6667706Z layer_outputs = layer_module( 2025-08-14T21:54:38.6668145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.6668654Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.6669159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6669631Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6669837Z 2025-08-14T21:54:38.6669923Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6670174Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6670555Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6670891Z return mod(**inputs) 2025-08-14T21:54:38.6671308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6671741Z outputs = self.mobilebert( 2025-08-14T21:54:38.6672159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6672606Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6673045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6673479Z layer_outputs = layer_module( 2025-08-14T21:54:38.6673897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6674426Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6674968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.6675455Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.6675931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6676389Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6676546Z 2025-08-14T21:54:38.6676639Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6676888Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6677273Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6677620Z return mod(**inputs) 2025-08-14T21:54:38.6678034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6678466Z outputs = self.mobilebert( 2025-08-14T21:54:38.6678888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6679330Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6679761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6680191Z layer_outputs = layer_module( 2025-08-14T21:54:38.6680616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6681142Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6681666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.6682152Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.6682665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.6683148Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6683683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6684170Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6684335Z 2025-08-14T21:54:38.6684421Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6684676Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6685077Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6685424Z return mod(**inputs) 2025-08-14T21:54:38.6685840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6686274Z outputs = self.mobilebert( 2025-08-14T21:54:38.6686692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6687112Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6687518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6687945Z layer_outputs = layer_module( 2025-08-14T21:54:38.6688378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.6688910Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.6689440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.6689907Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.6690378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.6690830Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.6691272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6691729Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6691893Z 2025-08-14T21:54:38.6691979Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6692202Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6692417Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6692639Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6692855Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6693065Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6693284Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6693505Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6693716Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6693937Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6694192Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6694580Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6694919Z return mod(**inputs) 2025-08-14T21:54:38.6695341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6695806Z outputs = self.mobilebert( 2025-08-14T21:54:38.6696233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6696672Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6697133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6697570Z layer_outputs = layer_module( 2025-08-14T21:54:38.6697989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.6698467Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.6698931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.6699422Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.6700040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.6700552Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6701065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6701522Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6701677Z 2025-08-14T21:54:38.6701763Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6702019Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6702400Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6702739Z return mod(**inputs) 2025-08-14T21:54:38.6703151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6703578Z outputs = self.mobilebert( 2025-08-14T21:54:38.6703997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6704426Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6704856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6705292Z layer_outputs = layer_module( 2025-08-14T21:54:38.6705709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6706173Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6706632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6707105Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6707571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6708049Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6708226Z 2025-08-14T21:54:38.6708318Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6708567Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6708941Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6709284Z return mod(**inputs) 2025-08-14T21:54:38.6709696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6710127Z outputs = self.mobilebert( 2025-08-14T21:54:38.6710549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6710991Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6711424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6711850Z layer_outputs = layer_module( 2025-08-14T21:54:38.6712303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6712764Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6713213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6713722Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6714231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6714690Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6715148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6715571Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6715725Z 2025-08-14T21:54:38.6715803Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6716041Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6716388Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6716706Z return mod(**inputs) 2025-08-14T21:54:38.6717092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6717489Z outputs = self.mobilebert( 2025-08-14T21:54:38.6717883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6718290Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6718685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6719076Z layer_outputs = layer_module( 2025-08-14T21:54:38.6719468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6719887Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6720305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6720733Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6721167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6721606Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6721768Z 2025-08-14T21:54:38.6721854Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6722079Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6722429Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6722745Z return mod(**inputs) 2025-08-14T21:54:38.6723118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6723521Z outputs = self.mobilebert( 2025-08-14T21:54:38.6723910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6724312Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6724699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6725100Z layer_outputs = layer_module( 2025-08-14T21:54:38.6725492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6725904Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6726346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6726794Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6727239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6727698Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6728169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6728591Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6728759Z 2025-08-14T21:54:38.6728844Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6729067Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6729416Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6729736Z return mod(**inputs) 2025-08-14T21:54:38.6730110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6730516Z outputs = self.mobilebert( 2025-08-14T21:54:38.6730904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6731310Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6731701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6732102Z layer_outputs = layer_module( 2025-08-14T21:54:38.6732495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6732913Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6733327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6733765Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6734203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6734634Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6734803Z 2025-08-14T21:54:38.6734882Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6735118Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6735472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6735784Z return mod(**inputs) 2025-08-14T21:54:38.6736164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6736566Z outputs = self.mobilebert( 2025-08-14T21:54:38.6736945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6737346Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6737787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6738197Z layer_outputs = layer_module( 2025-08-14T21:54:38.6738596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6739031Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6739470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6740007Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6740523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6741033Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6741538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6742245Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6742408Z 2025-08-14T21:54:38.6742571Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6742828Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6743212Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6743583Z return mod(**inputs) 2025-08-14T21:54:38.6743978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6744389Z outputs = self.mobilebert( 2025-08-14T21:54:38.6744788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6745194Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6745607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6746018Z layer_outputs = layer_module( 2025-08-14T21:54:38.6746415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.6746870Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.6747323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6747773Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6747935Z 2025-08-14T21:54:38.6748014Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6748251Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6748626Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6748951Z return mod(**inputs) 2025-08-14T21:54:38.6749334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6749748Z outputs = self.mobilebert( 2025-08-14T21:54:38.6750144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6750546Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6750950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6751359Z layer_outputs = layer_module( 2025-08-14T21:54:38.6751766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6752255Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6752749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.6753209Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.6753667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6754092Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6754246Z 2025-08-14T21:54:38.6754326Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6754563Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6754913Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6755273Z return mod(**inputs) 2025-08-14T21:54:38.6755672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6756086Z outputs = self.mobilebert( 2025-08-14T21:54:38.6756504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6756931Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6757345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6757771Z layer_outputs = layer_module( 2025-08-14T21:54:38.6758177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6758668Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6759166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.6759612Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.6760072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.6760529Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6760986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6761419Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6761569Z 2025-08-14T21:54:38.6761648Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6761887Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6762252Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6762571Z return mod(**inputs) 2025-08-14T21:54:38.6762962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6763372Z outputs = self.mobilebert( 2025-08-14T21:54:38.6763762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6764172Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6764574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6764986Z layer_outputs = layer_module( 2025-08-14T21:54:38.6765380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.6765877Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.6766376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.6766820Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.6767259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.6767690Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.6768121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6768554Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6768709Z 2025-08-14T21:54:38.6768790Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6769004Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6769213Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6769448Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6769658Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6769866Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6770062Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6770292Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6770500Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6770717Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6770957Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6771324Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6771677Z return mod(**inputs) 2025-08-14T21:54:38.6772065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6772484Z outputs = self.mobilebert( 2025-08-14T21:54:38.6772900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6773308Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6773732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6774148Z layer_outputs = layer_module( 2025-08-14T21:54:38.6774558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.6774983Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.6775408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.6775875Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.6776338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.6776799Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6777260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6777694Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6777842Z 2025-08-14T21:54:38.6777923Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6778162Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6778531Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6778860Z return mod(**inputs) 2025-08-14T21:54:38.6779245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6779756Z outputs = self.mobilebert( 2025-08-14T21:54:38.6780201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6780686Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6781157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6781604Z layer_outputs = layer_module( 2025-08-14T21:54:38.6782032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6782570Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6783126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6783603Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6784105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6784581Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6784772Z 2025-08-14T21:54:38.6784866Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6785109Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6785483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6785830Z return mod(**inputs) 2025-08-14T21:54:38.6786222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6786668Z outputs = self.mobilebert( 2025-08-14T21:54:38.6787067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6787490Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6787910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6788333Z layer_outputs = layer_module( 2025-08-14T21:54:38.6788745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6789195Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6789640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6790101Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6790576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6791041Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6791510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6791965Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6792124Z 2025-08-14T21:54:38.6792208Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6792455Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6792829Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6793160Z return mod(**inputs) 2025-08-14T21:54:38.6793561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6793990Z outputs = self.mobilebert( 2025-08-14T21:54:38.6794394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6794818Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6795235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6795651Z layer_outputs = layer_module( 2025-08-14T21:54:38.6796057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6796508Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6796960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6797406Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6797848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6798291Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6798455Z 2025-08-14T21:54:38.6798543Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6798818Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6799171Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6799484Z return mod(**inputs) 2025-08-14T21:54:38.6799890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6800317Z outputs = self.mobilebert( 2025-08-14T21:54:38.6800717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6801147Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6801544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6801960Z layer_outputs = layer_module( 2025-08-14T21:54:38.6802362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6802792Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6803212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6803679Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6804131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6804578Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6805021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6805439Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6805584Z 2025-08-14T21:54:38.6805670Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6805905Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6806273Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6806601Z return mod(**inputs) 2025-08-14T21:54:38.6806997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6807404Z outputs = self.mobilebert( 2025-08-14T21:54:38.6807808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6808222Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6808631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6809037Z layer_outputs = layer_module( 2025-08-14T21:54:38.6809441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6809872Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6810302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6810768Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6811207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6811660Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6811828Z 2025-08-14T21:54:38.6811909Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6812153Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6812516Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6812840Z return mod(**inputs) 2025-08-14T21:54:38.6813254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6813669Z outputs = self.mobilebert( 2025-08-14T21:54:38.6814065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6814495Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6814925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6815408Z layer_outputs = layer_module( 2025-08-14T21:54:38.6815820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6816252Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6816697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6817185Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6817688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6818172Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6818671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6819134Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6819297Z 2025-08-14T21:54:38.6819382Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6819736Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6820137Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6820518Z return mod(**inputs) 2025-08-14T21:54:38.6820929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6821344Z outputs = self.mobilebert( 2025-08-14T21:54:38.6821747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6822157Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6822570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6822988Z layer_outputs = layer_module( 2025-08-14T21:54:38.6823397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.6823851Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.6824314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6824779Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6824944Z 2025-08-14T21:54:38.6825033Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6825270Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6825638Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6825966Z return mod(**inputs) 2025-08-14T21:54:38.6826351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6826771Z outputs = self.mobilebert( 2025-08-14T21:54:38.6827174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6827593Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6828021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6828442Z layer_outputs = layer_module( 2025-08-14T21:54:38.6828845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6829375Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6829865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.6830388Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.6830852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6831281Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6831438Z 2025-08-14T21:54:38.6831519Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6831758Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6832124Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6832444Z return mod(**inputs) 2025-08-14T21:54:38.6832835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6833243Z outputs = self.mobilebert( 2025-08-14T21:54:38.6833633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6834056Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6834468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6834882Z layer_outputs = layer_module( 2025-08-14T21:54:38.6835283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6835784Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6836289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.6836755Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.6837214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.6837676Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6838139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6838584Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6838734Z 2025-08-14T21:54:38.6838814Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6839054Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6839417Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6839739Z return mod(**inputs) 2025-08-14T21:54:38.6840136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6840557Z outputs = self.mobilebert( 2025-08-14T21:54:38.6840959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6841370Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6841946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6842455Z layer_outputs = layer_module( 2025-08-14T21:54:38.6842857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.6843361Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.6843924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.6844398Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.6844867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.6845354Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.6845789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6846213Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6846358Z 2025-08-14T21:54:38.6846437Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6846648Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6846852Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6847051Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6847256Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6847461Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6847662Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6847870Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6848072Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6848272Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6848493Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6848847Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6849169Z return mod(**inputs) 2025-08-14T21:54:38.6849545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6849954Z outputs = self.mobilebert( 2025-08-14T21:54:38.6850345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6850748Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6851138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6851539Z layer_outputs = layer_module( 2025-08-14T21:54:38.6851938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.6852358Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.6852767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.6853220Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.6853671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.6854116Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6854565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6854995Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6855143Z 2025-08-14T21:54:38.6855230Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6855458Z cudagraph partition due to non gpu ops. 
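Context for the traces above: every "cudagraph partition due to non gpu ops" entry bottoms out either in MobileBERT's intermediate activation call (modeling_mobilebert.py line 360) or in its NoNorm-style scale-and-shift, `return input_tensor * self.weight + self.bias` (line 138). The following is a rough, illustrative sketch only, not the benchmark harness: it assumes a recent PyTorch with torch.compile, and `TinyNoNorm` plus the tensor shapes are made-up stand-ins for the pattern those frames point at.

```python
# Illustrative sketch only -- not the benchmark harness from this job.
# Assumes a recent PyTorch with torch.compile; "TinyNoNorm" is a made-up name
# mirroring the `input_tensor * self.weight + self.bias` expression quoted in
# the "cudagraph partition due to non gpu ops" traces above.
import torch
import torch.nn as nn


class TinyNoNorm(nn.Module):
    """Elementwise scale-and-shift, like MobileBERT's NoNorm layer."""

    def __init__(self, feat_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # The exact expression the traces above keep landing on.
        return input_tensor * self.weight + self.bias


if __name__ == "__main__":
    mod = TinyNoNorm(512)
    # "reduce-overhead" asks Inductor to use CUDA graphs where it can; on a
    # CPU-only run (as in this job) the ops are not GPU ops, which appears to
    # match the wording of the partition messages in this log.
    compiled = torch.compile(mod, mode="reduce-overhead")
    out = compiled(torch.randn(8, 128, 512))
    print(out.shape)
```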
Found from : 2025-08-14T21:54:38.6855821Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6856145Z return mod(**inputs) 2025-08-14T21:54:38.6856552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6856964Z outputs = self.mobilebert( 2025-08-14T21:54:38.6857358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6857795Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6858208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6858628Z layer_outputs = layer_module( 2025-08-14T21:54:38.6859052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6859563Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6860023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6860499Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6860964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6861421Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6861583Z 2025-08-14T21:54:38.6861662Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6861900Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6862267Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6862598Z return mod(**inputs) 2025-08-14T21:54:38.6862982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6863399Z outputs = self.mobilebert( 2025-08-14T21:54:38.6863800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6864205Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6864610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6865027Z layer_outputs = layer_module( 2025-08-14T21:54:38.6865421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6865854Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6866285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6866753Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6867210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6867669Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6868127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6868569Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6868716Z 2025-08-14T21:54:38.6868798Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6869038Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6869403Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6869722Z return mod(**inputs) 2025-08-14T21:54:38.6870118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6870533Z outputs = self.mobilebert( 2025-08-14T21:54:38.6870964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6871374Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6871789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6872217Z layer_outputs = layer_module( 2025-08-14T21:54:38.6872639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6873068Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6873519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6873967Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6874407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6874852Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6875025Z 2025-08-14T21:54:38.6875105Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6875343Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6875694Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6876019Z return mod(**inputs) 2025-08-14T21:54:38.6876409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6876817Z outputs = self.mobilebert( 2025-08-14T21:54:38.6877216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6877624Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6878031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6878432Z layer_outputs = layer_module( 2025-08-14T21:54:38.6878827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6879257Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6879688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6880141Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6880602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6881059Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6881524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6881934Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6882084Z 2025-08-14T21:54:38.6882161Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6882397Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6882752Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6883081Z return mod(**inputs) 2025-08-14T21:54:38.6883472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6883884Z outputs = self.mobilebert( 2025-08-14T21:54:38.6884273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6884685Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6885113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6885524Z layer_outputs = layer_module( 2025-08-14T21:54:38.6885929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6886382Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6886835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6887281Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6887769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6888237Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6888410Z 2025-08-14T21:54:38.6888502Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6888747Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6889127Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6889471Z return mod(**inputs) 2025-08-14T21:54:38.6889878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6890320Z outputs = self.mobilebert( 2025-08-14T21:54:38.6890714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6891122Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6891517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6891928Z layer_outputs = layer_module( 2025-08-14T21:54:38.6892329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6892761Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6893180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6893637Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6894094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6894539Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6894994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6895432Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6895579Z 2025-08-14T21:54:38.6895666Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6895895Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6896256Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6896584Z return mod(**inputs) 2025-08-14T21:54:38.6896966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6897379Z outputs = self.mobilebert( 2025-08-14T21:54:38.6897776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6898205Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6898622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6899054Z layer_outputs = layer_module( 2025-08-14T21:54:38.6899570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.6900071Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.6900544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6901044Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6901253Z 2025-08-14T21:54:38.6901350Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6901599Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6902005Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6902336Z return mod(**inputs) 2025-08-14T21:54:38.6902738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6903156Z outputs = self.mobilebert( 2025-08-14T21:54:38.6903564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6903983Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6904401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6904817Z layer_outputs = layer_module( 2025-08-14T21:54:38.6905228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6905732Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6906229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.6906696Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.6907162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6907600Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6907753Z 2025-08-14T21:54:38.6907839Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6908086Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6908466Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6908792Z return mod(**inputs) 2025-08-14T21:54:38.6909173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6909597Z outputs = self.mobilebert( 2025-08-14T21:54:38.6909992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6910395Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6910798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6911205Z layer_outputs = layer_module( 2025-08-14T21:54:38.6911610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6912090Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6912584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.6913040Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.6913493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.6913962Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6914410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6914824Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6914989Z 2025-08-14T21:54:38.6915075Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6915299Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6915665Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6915981Z return mod(**inputs) 2025-08-14T21:54:38.6916376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6916781Z outputs = self.mobilebert( 2025-08-14T21:54:38.6917181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6917593Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6917995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6918408Z layer_outputs = layer_module( 2025-08-14T21:54:38.6918810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.6919302Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.6919804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.6920250Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.6920686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.6921135Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.6921548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6921973Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6922116Z 2025-08-14T21:54:38.6922203Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6922406Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6922614Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6922822Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6923020Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6923226Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6923432Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6923627Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6923831Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6924037Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6924273Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6924477Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6924544Z return mod(**inputs) 2025-08-14T21:54:38.6924837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6924910Z outputs = self.mobilebert( 2025-08-14T21:54:38.6925184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6925268Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6925542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6925624Z layer_outputs = layer_module( 2025-08-14T21:54:38.6925959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.6926050Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.6926333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.6926478Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.6926771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.6926908Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6927203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6927303Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6927307Z 2025-08-14T21:54:38.6927383Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6927488Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6927694Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6927758Z return mod(**inputs) 2025-08-14T21:54:38.6928049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6928122Z outputs = self.mobilebert( 2025-08-14T21:54:38.6928404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6928488Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6928763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6928834Z layer_outputs = layer_module( 2025-08-14T21:54:38.6929119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6929216Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6929500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6929616Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6929912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6930035Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6930040Z 2025-08-14T21:54:38.6930123Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6930238Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6930455Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6930524Z return mod(**inputs) 2025-08-14T21:54:38.6930829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6930904Z outputs = self.mobilebert( 2025-08-14T21:54:38.6931196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6931282Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6931582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6931663Z layer_outputs = layer_module( 2025-08-14T21:54:38.6931943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6932038Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6932349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6932477Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6932763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6932903Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6933201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6933303Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6933323Z 2025-08-14T21:54:38.6933403Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6933513Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6933713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6933779Z return mod(**inputs) 2025-08-14T21:54:38.6934066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6934139Z outputs = self.mobilebert( 2025-08-14T21:54:38.6934416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6934500Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6934774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6934855Z layer_outputs = layer_module( 2025-08-14T21:54:38.6935131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6935225Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6935510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6935620Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6935894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6936013Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6936017Z 2025-08-14T21:54:38.6936098Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6936208Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6936403Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6936470Z return mod(**inputs) 2025-08-14T21:54:38.6936757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6936827Z outputs = self.mobilebert( 2025-08-14T21:54:38.6937108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6937181Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6937463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6937544Z layer_outputs = layer_module( 2025-08-14T21:54:38.6937818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6937911Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6938198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6938321Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6938624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6938747Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6939024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6939154Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6939180Z 2025-08-14T21:54:38.6939269Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6939389Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6939687Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6939792Z return mod(**inputs) 2025-08-14T21:54:38.6940108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6940186Z outputs = self.mobilebert( 2025-08-14T21:54:38.6940522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6940610Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6940903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6940989Z layer_outputs = layer_module( 2025-08-14T21:54:38.6941279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6941379Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6941679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.6941942Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.6942257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6942376Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6942381Z 2025-08-14T21:54:38.6942466Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6942589Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6942806Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6942878Z return mod(**inputs) 2025-08-14T21:54:38.6943184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6943348Z outputs = self.mobilebert( 2025-08-14T21:54:38.6943702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6943805Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6944125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6944271Z layer_outputs = layer_module( 2025-08-14T21:54:38.6944574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.6944992Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.6945317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.6945556Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.6945878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.6946036Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6946452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6946586Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6946591Z 2025-08-14T21:54:38.6946728Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6946900Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6947161Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6947255Z return mod(**inputs) 2025-08-14T21:54:38.6969815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6970191Z outputs = self.mobilebert( 2025-08-14T21:54:38.6970539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6970629Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6970927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6971014Z layer_outputs = layer_module( 2025-08-14T21:54:38.6971294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.6971433Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.6971724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.6971842Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.6971851Z 2025-08-14T21:54:38.6971937Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6972056Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.6972264Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6972343Z return mod(**inputs) 2025-08-14T21:54:38.6972625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6972702Z outputs = self.mobilebert( 2025-08-14T21:54:38.6972986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6973064Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6973345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6973420Z layer_outputs = layer_module( 2025-08-14T21:54:38.6973690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6973859Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6974134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.6974259Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.6974541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6974637Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6974642Z 2025-08-14T21:54:38.6974731Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6974837Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6975039Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6975113Z return mod(**inputs) 2025-08-14T21:54:38.6975391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6975508Z outputs = self.mobilebert( 2025-08-14T21:54:38.6975782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6975891Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6976196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6976267Z layer_outputs = layer_module( 2025-08-14T21:54:38.6976545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.6976730Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.6977008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.6977145Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.6977427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.6977547Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6977840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6977935Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6977940Z 2025-08-14T21:54:38.6978028Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6978137Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6978343Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6978417Z return mod(**inputs) 2025-08-14T21:54:38.6978702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6978774Z outputs = self.mobilebert( 2025-08-14T21:54:38.6979060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6979137Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6979425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6979579Z layer_outputs = layer_module( 2025-08-14T21:54:38.6979885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.6980064Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.6980374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.6980505Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.6980806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.6980904Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.6981210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6981309Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6981314Z 2025-08-14T21:54:38.6981409Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6981496Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6981578Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6981668Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6981749Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6981830Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6981946Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6982029Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6982108Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6982197Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6982333Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.6982575Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.6982648Z return mod(**inputs) 2025-08-14T21:54:38.6982955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.6983055Z outputs = self.mobilebert( 2025-08-14T21:54:38.6983328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.6983403Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.6983717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.6983794Z layer_outputs = layer_module( 2025-08-14T21:54:38.6984093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.6984189Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.6984481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.6984621Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.6984912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.6985051Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.6985353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.6985448Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.6985452Z 2025-08-14T21:54:38.6985542Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.6985652Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:54:38.6985859Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.6985938Z     return mod(**inputs)
2025-08-14T21:54:38.6986229Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.6986314Z     outputs = self.mobilebert(
2025-08-14T21:54:38.6986601Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.6986677Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.6986986Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.6987062Z     layer_outputs = layer_module(
2025-08-14T21:54:38.6987352Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:54:38.6987464Z     attention_output = ffn_module(attention_output)
2025-08-14T21:54:38.6987758Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:54:38.6987887Z     intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:54:38.6988178Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:54:38.6988298Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:54:38.6988302Z 
2025-08-14T21:54:38.6988392Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.6988519Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:54:38.6988741Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.6988811Z     return mod(**inputs)
2025-08-14T21:54:38.6989130Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.6989232Z     outputs = self.mobilebert(
2025-08-14T21:54:38.6989528Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.6989657Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.6990112Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.6990202Z     layer_outputs = layer_module(
2025-08-14T21:54:38.6990512Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:54:38.6990614Z     attention_output = ffn_module(attention_output)
2025-08-14T21:54:38.6990915Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
2025-08-14T21:54:38.6991061Z     layer_outputs = self.output(intermediate_output, hidden_states)
2025-08-14T21:54:38.6991357Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
2025-08-14T21:54:38.6991496Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:54:38.6991788Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:54:38.6991885Z     return input_tensor * self.weight + self.bias
2025-08-14T21:54:38.6991889Z 
2025-08-14T21:54:38.6991981Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.6992090Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:54:38.6992318Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.6992384Z     return mod(**inputs)
2025-08-14T21:54:38.6992664Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.6992746Z     outputs = self.mobilebert(
2025-08-14T21:54:38.6993020Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.6993096Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.6993379Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.6993449Z     layer_outputs = layer_module(
2025-08-14T21:54:38.6993730Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:54:38.6993823Z     attention_output = ffn_module(attention_output)
2025-08-14T21:54:38.6994097Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:54:38.6994220Z     intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:54:38.6994513Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:54:38.6994637Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:54:38.6994642Z 
2025-08-14T21:54:38.6994724Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.6994832Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:54:38.6995049Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.6995147Z     return mod(**inputs)
2025-08-14T21:54:38.6995445Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.6995529Z     outputs = self.mobilebert(
2025-08-14T21:54:38.6995856Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.6995972Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.6996262Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.6996352Z     layer_outputs = layer_module(
2025-08-14T21:54:38.6996642Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:54:38.6996743Z     attention_output = ffn_module(attention_output)
2025-08-14T21:54:38.6997042Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
2025-08-14T21:54:38.6997183Z     layer_outputs = self.output(intermediate_output, hidden_states)
2025-08-14T21:54:38.6997479Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
2025-08-14T21:54:38.6997617Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:54:38.6997918Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:54:38.6998018Z     return input_tensor * self.weight + self.bias
2025-08-14T21:54:38.6998022Z 
2025-08-14T21:54:38.6998112Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.6998222Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:54:38.6998443Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.6998515Z     return mod(**inputs)
2025-08-14T21:54:38.6998816Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.6998900Z     outputs = self.mobilebert(
2025-08-14T21:54:38.6999199Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.6999279Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.6999587Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.6999664Z     layer_outputs = layer_module(
2025-08-14T21:54:38.6999970Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:54:38.7000071Z     attention_output = ffn_module(attention_output)
2025-08-14T21:54:38.7000370Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
2025-08-14T21:54:38.7000499Z     intermediate_output = self.intermediate(hidden_states)
2025-08-14T21:54:38.7000795Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:54:38.7000925Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:54:38.7000930Z 
2025-08-14T21:54:38.7001020Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.7001122Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:54:38.7001331Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.7001398Z     return mod(**inputs)
2025-08-14T21:54:38.7001682Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.7001761Z     outputs = self.mobilebert(
2025-08-14T21:54:38.7002060Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.7002145Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.7002452Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.7002545Z     layer_outputs = layer_module(
2025-08-14T21:54:38.7002851Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
2025-08-14T21:54:38.7002968Z     attention_output = ffn_module(attention_output)
2025-08-14T21:54:38.7003270Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
2025-08-14T21:54:38.7003400Z     layer_outputs = self.output(intermediate_output, hidden_states)
2025-08-14T21:54:38.7003694Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
2025-08-14T21:54:38.7003830Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:54:38.7004121Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:54:38.7004221Z     return input_tensor * self.weight + self.bias
2025-08-14T21:54:38.7004234Z 
2025-08-14T21:54:38.7004316Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.7004425Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:54:38.7004645Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.7004714Z     return mod(**inputs)
2025-08-14T21:54:38.7005013Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.7005100Z     outputs = self.mobilebert(
2025-08-14T21:54:38.7005404Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.7005490Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.7005796Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.7005872Z     layer_outputs = layer_module(
2025-08-14T21:54:38.7006175Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
2025-08-14T21:54:38.7006304Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:54:38.7006596Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
2025-08-14T21:54:38.7006721Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:54:38.7006724Z 
2025-08-14T21:54:38.7006809Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.7006926Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:54:38.7007136Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.7007208Z     return mod(**inputs)
2025-08-14T21:54:38.7007514Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.7007590Z     outputs = self.mobilebert(
2025-08-14T21:54:38.7007892Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.7007970Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.7008260Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.7008344Z     layer_outputs = layer_module(
2025-08-14T21:54:38.7008657Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:54:38.7008825Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:54:38.7009147Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
2025-08-14T21:54:38.7009295Z     layer_output = self.LayerNorm(layer_output + residual_tensor_1)
2025-08-14T21:54:38.7009600Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:54:38.7009716Z     return input_tensor * self.weight + self.bias
2025-08-14T21:54:38.7009721Z 
2025-08-14T21:54:38.7009805Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.7009921Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:54:38.7010133Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:54:38.7010211Z     return mod(**inputs)
2025-08-14T21:54:38.7010509Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-08-14T21:54:38.7010587Z     outputs = self.mobilebert(
2025-08-14T21:54:38.7010892Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
2025-08-14T21:54:38.7010970Z     encoder_outputs = self.encoder(
2025-08-14T21:54:38.7011275Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
2025-08-14T21:54:38.7011360Z     layer_outputs = layer_module(
2025-08-14T21:54:38.7011655Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
2025-08-14T21:54:38.7011830Z     layer_output = self.output(intermediate_output, attention_output, hidden_states)
2025-08-14T21:54:38.7012122Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
2025-08-14T21:54:38.7012252Z     layer_output = self.bottleneck(layer_output, residual_tensor_2)
2025-08-14T21:54:38.7012558Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
2025-08-14T21:54:38.7012686Z     layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
2025-08-14T21:54:38.7012988Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
2025-08-14T21:54:38.7013086Z     return input_tensor * self.weight + self.bias
2025-08-14T21:54:38.7013090Z 
2025-08-14T21:54:38.7013174Z cudagraph partition due to non gpu ops
2025-08-14T21:54:38.7013290Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:54:38.7088142Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7088227Z return mod(**inputs) 2025-08-14T21:54:38.7088533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7088605Z outputs = self.mobilebert( 2025-08-14T21:54:38.7088878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7088979Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7089252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7089340Z layer_outputs = layer_module( 2025-08-14T21:54:38.7089608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7089694Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7089971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7090093Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7090369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7090489Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7090758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7090854Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7090858Z 2025-08-14T21:54:38.7090934Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7091046Z cudagraph partition due to non gpu ops. 
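Note: every "cudagraph partition due to non gpu ops" traceback above bottoms out in the same innermost frame, modeling_mobilebert.py line 138, which applies an element-wise scale-and-shift in place of a true LayerNorm (MobileBERT's no-norm option). A minimal sketch of such a layer, with a hypothetical class name (the real implementation lives in transformers' modeling_mobilebert.py), is:

import torch
from torch import nn

class NoNormSketch(nn.Module):
    """Illustrative stand-in for the element-wise normalization layer
    that the logged frame at modeling_mobilebert.py:138 points to."""

    def __init__(self, feat_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # Same expression as the logged frame: a pointwise multiply-add, no reduction.
        return input_tensor * self.weight + self.bias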
Found from : 2025-08-14T21:54:38.7091242Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7091307Z return mod(**inputs) 2025-08-14T21:54:38.7091589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7091659Z outputs = self.mobilebert( 2025-08-14T21:54:38.7091928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7092010Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7092285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7092364Z layer_outputs = layer_module( 2025-08-14T21:54:38.7092645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7092741Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7093026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7093143Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7093424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7093537Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7093540Z 2025-08-14T21:54:38.7093619Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7093726Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7093926Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7094010Z return mod(**inputs) 2025-08-14T21:54:38.7094302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7094371Z outputs = self.mobilebert( 2025-08-14T21:54:38.7094682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7094770Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7095047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7095146Z layer_outputs = layer_module( 2025-08-14T21:54:38.7095425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7095518Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7095803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7095925Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7096205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7096327Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7096601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7096700Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7096704Z 2025-08-14T21:54:38.7096781Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7096888Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7097084Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7097151Z return mod(**inputs) 2025-08-14T21:54:38.7097437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7097506Z outputs = self.mobilebert( 2025-08-14T21:54:38.7097790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7097863Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7098139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7098217Z layer_outputs = layer_module( 2025-08-14T21:54:38.7098490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7098584Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7098869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7098980Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7099274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7099385Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7099389Z 2025-08-14T21:54:38.7099467Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7099680Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7099904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7099982Z return mod(**inputs) 2025-08-14T21:54:38.7100287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7100398Z outputs = self.mobilebert( 2025-08-14T21:54:38.7100704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7100782Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7101163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7101262Z layer_outputs = layer_module( 2025-08-14T21:54:38.7101542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7101661Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7101956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7102085Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7102362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7102493Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7102772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7102864Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7102869Z 2025-08-14T21:54:38.7102957Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7103060Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7103259Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7103333Z return mod(**inputs) 2025-08-14T21:54:38.7103611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7103691Z outputs = self.mobilebert( 2025-08-14T21:54:38.7103967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7104039Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7104324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7104395Z layer_outputs = layer_module( 2025-08-14T21:54:38.7104679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7104773Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7105051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7105170Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7105449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7105558Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7105570Z 2025-08-14T21:54:38.7105649Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7105751Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7105958Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7106024Z return mod(**inputs) 2025-08-14T21:54:38.7106299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7106381Z outputs = self.mobilebert( 2025-08-14T21:54:38.7106655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7106757Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7107036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7107106Z layer_outputs = layer_module( 2025-08-14T21:54:38.7107406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7107527Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7107801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7107948Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7108220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7108344Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7108627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7108716Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7108721Z 2025-08-14T21:54:38.7108807Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7108907Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7109111Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7109176Z return mod(**inputs) 2025-08-14T21:54:38.7109453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7109533Z outputs = self.mobilebert( 2025-08-14T21:54:38.7109812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7109886Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7110170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7110239Z layer_outputs = layer_module( 2025-08-14T21:54:38.7110520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.7110641Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.7110917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7111035Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7111039Z 2025-08-14T21:54:38.7111117Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7111227Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7111425Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7111491Z return mod(**inputs) 2025-08-14T21:54:38.7111776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7111847Z outputs = self.mobilebert( 2025-08-14T21:54:38.7112120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7112199Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7112474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7112553Z layer_outputs = layer_module( 2025-08-14T21:54:38.7112831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7113013Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7113296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.7113416Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.7113742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7113834Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7113837Z 2025-08-14T21:54:38.7113914Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7114042Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7114238Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7114304Z return mod(**inputs) 2025-08-14T21:54:38.7114594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7114665Z outputs = self.mobilebert( 2025-08-14T21:54:38.7114949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7115026Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7115311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7115389Z layer_outputs = layer_module( 2025-08-14T21:54:38.7115657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7115820Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7116087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.7116206Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.7116481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.7116600Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7116871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7116967Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7116970Z 2025-08-14T21:54:38.7117048Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7117154Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7117344Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7117409Z return mod(**inputs) 2025-08-14T21:54:38.7117687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7117755Z outputs = self.mobilebert( 2025-08-14T21:54:38.7118028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7118102Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7118373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7118449Z layer_outputs = layer_module( 2025-08-14T21:54:38.7118718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.7118873Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.7119176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.7119285Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.7119562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.7119664Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.7119947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7120045Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7120065Z 2025-08-14T21:54:38.7120143Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120226Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120299Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120371Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120451Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120525Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120599Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120680Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120752Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120826Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7120934Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7121131Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7121204Z return mod(**inputs) 2025-08-14T21:54:38.7121487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7121560Z outputs = self.mobilebert( 2025-08-14T21:54:38.7121846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7121919Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7122203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7122281Z layer_outputs = layer_module( 2025-08-14T21:54:38.7122572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7122666Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7122935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7123060Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7123347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7123469Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7123756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7123848Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7123851Z 2025-08-14T21:54:38.7123930Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7124038Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7124237Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7124302Z return mod(**inputs) 2025-08-14T21:54:38.7124593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7124667Z outputs = self.mobilebert( 2025-08-14T21:54:38.7124954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7125026Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7125325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7125406Z layer_outputs = layer_module( 2025-08-14T21:54:38.7125708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7125829Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7126105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7126232Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7126516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7126624Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7126628Z 2025-08-14T21:54:38.7126715Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7126817Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7127017Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7127090Z return mod(**inputs) 2025-08-14T21:54:38.7127376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7127446Z outputs = self.mobilebert( 2025-08-14T21:54:38.7127738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7127813Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7128102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7128174Z layer_outputs = layer_module( 2025-08-14T21:54:38.7128457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7128559Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7128841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7128966Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7129254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7129378Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7129669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7129760Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7129763Z 2025-08-14T21:54:38.7129846Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7129957Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7130158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7130234Z return mod(**inputs) 2025-08-14T21:54:38.7130520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7130591Z outputs = self.mobilebert( 2025-08-14T21:54:38.7130879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7130954Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7131238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7131317Z layer_outputs = layer_module( 2025-08-14T21:54:38.7131621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7131726Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7132016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7132187Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7132475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7132604Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7132608Z 2025-08-14T21:54:38.7132695Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7132798Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7132995Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7133069Z return mod(**inputs) 2025-08-14T21:54:38.7133347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7133420Z outputs = self.mobilebert( 2025-08-14T21:54:38.7133705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7133778Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7134060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7134132Z layer_outputs = layer_module( 2025-08-14T21:54:38.7134407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7134508Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7134787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7134915Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7135191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7135312Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7135594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7135688Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7135692Z 2025-08-14T21:54:38.7135780Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7135882Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7136080Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7136155Z return mod(**inputs) 2025-08-14T21:54:38.7136433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7136506Z outputs = self.mobilebert( 2025-08-14T21:54:38.7136794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7136865Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7137147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7137218Z layer_outputs = layer_module( 2025-08-14T21:54:38.7137495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7137595Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7137891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7138007Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7138307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7138436Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7138440Z 2025-08-14T21:54:38.7138529Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7138632Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7138855Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7138929Z return mod(**inputs) 2025-08-14T21:54:38.7139214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7139299Z outputs = self.mobilebert( 2025-08-14T21:54:38.7139783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7139872Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7140179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7140254Z layer_outputs = layer_module( 2025-08-14T21:54:38.7140553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7140659Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7140965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7141104Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7141401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7141526Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7142041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7142142Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7142146Z 2025-08-14T21:54:38.7142234Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7142342Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7142543Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7142618Z return mod(**inputs) 2025-08-14T21:54:38.7142901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7142972Z outputs = self.mobilebert( 2025-08-14T21:54:38.7143258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7143332Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7143622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7143693Z layer_outputs = layer_module( 2025-08-14T21:54:38.7143973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.7144104Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.7144383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7144574Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7144579Z 2025-08-14T21:54:38.7144660Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7144761Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7144969Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7145063Z return mod(**inputs) 2025-08-14T21:54:38.7145369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7145452Z outputs = self.mobilebert( 2025-08-14T21:54:38.7145728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7145839Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7146120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7146192Z layer_outputs = layer_module( 2025-08-14T21:54:38.7146474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7146633Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7146919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.7147040Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.7147313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7147414Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7147417Z 2025-08-14T21:54:38.7147497Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7147599Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7147803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7147872Z return mod(**inputs) 2025-08-14T21:54:38.7148157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7148230Z outputs = self.mobilebert( 2025-08-14T21:54:38.7148508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7148590Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7148866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7148943Z layer_outputs = layer_module( 2025-08-14T21:54:38.7149215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7149373Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7149652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.7149776Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.7150052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.7150178Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7150451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7150547Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7150550Z 2025-08-14T21:54:38.7150627Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7150751Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7150954Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7151019Z return mod(**inputs) 2025-08-14T21:54:38.7151301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7151390Z outputs = self.mobilebert( 2025-08-14T21:54:38.7151682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7151766Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7152064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7152136Z layer_outputs = layer_module( 2025-08-14T21:54:38.7152423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.7152587Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.7152876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.7152989Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.7153266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.7153364Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.7153641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7153739Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7153742Z 2025-08-14T21:54:38.7153823Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7153903Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7153989Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154064Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154141Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154225Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154300Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154382Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154458Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154533Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7154641Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7154846Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7154911Z return mod(**inputs) 2025-08-14T21:54:38.7155202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7155273Z outputs = self.mobilebert( 2025-08-14T21:54:38.7155559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7155631Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7155909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7155990Z layer_outputs = layer_module( 2025-08-14T21:54:38.7156271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7156362Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7156650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7156774Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7157082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7157204Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7157497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7157609Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7157613Z 2025-08-14T21:54:38.7157692Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7157801Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7158013Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7158079Z return mod(**inputs) 2025-08-14T21:54:38.7158363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7158433Z outputs = self.mobilebert( 2025-08-14T21:54:38.7158706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7158787Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7159065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7159143Z layer_outputs = layer_module( 2025-08-14T21:54:38.7159421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7159517Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7159798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7159909Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7160195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7160305Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7160311Z 2025-08-14T21:54:38.7160391Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7160501Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7160699Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7160765Z return mod(**inputs) 2025-08-14T21:54:38.7161053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7161126Z outputs = self.mobilebert( 2025-08-14T21:54:38.7161410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7161485Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7161758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7161837Z layer_outputs = layer_module( 2025-08-14T21:54:38.7162111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7162206Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7162490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7162615Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7162897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7163016Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7163313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7163414Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7163435Z 2025-08-14T21:54:38.7163514Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7163622Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7163852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7163918Z return mod(**inputs) 2025-08-14T21:54:38.7164208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7164296Z outputs = self.mobilebert( 2025-08-14T21:54:38.7164570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7164664Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7164938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7165015Z layer_outputs = layer_module( 2025-08-14T21:54:38.7165288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7165383Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7165663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7165777Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7166056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7166165Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7166170Z 2025-08-14T21:54:38.7166248Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7166359Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
    layer_output = self.LayerNorm(layer_output + residual_tensor_1)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
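[editor's note] Every partition site reported above bottoms out in modeling_mobilebert.py line 138, MobileBERT's LayerNorm replacement: a NoNorm-style elementwise affine rather than a true LayerNorm, as the traced forward line shows. A minimal sketch of that module follows for reference; the forward body is copied verbatim from the frames above, while the class name, constructor signature, and parameter initialization are reconstructions and should be treated as assumptions, not the exact transformers source.

import torch
from torch import nn

class NoNorm(nn.Module):
    # Sketch of the module behind the `self.LayerNorm(...)` frames above: an
    # elementwise affine with no mean/variance normalization. Only the forward
    # body is taken from the trace; the initialization here is an assumption.
    def __init__(self, feat_size):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(feat_size))
        self.weight = nn.Parameter(torch.ones(feat_size))

    def forward(self, input_tensor):
        return input_tensor * self.weight + self.bias

The frames only record where the affected region originates in the model code, not which specific non-GPU operation triggered each split.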
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
    layer_output = self.bottleneck(layer_output, residual_tensor_2)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
    query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
    shared_attention_input = self.attention(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
    layer_input = self.LayerNorm(layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
    attention_output = self.output(self_outputs[0], layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
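[editor's note] The repeated "cudagraph partition due to non gpu ops" lines are emitted by inductor's CUDA-graph handling when a compiled region contains work it cannot capture on the GPU, so graph recording is split around that work and the originating stack is logged. The sketch below is a hypothetical, minimal way to provoke the same class of message; it is not taken from this job, and both the TORCH_LOGS="cudagraphs" setting and the deliberate CPU round-trip are assumptions used purely for illustration.

import os

# Assumption: the "cudagraphs" TORCH_LOGS artifact surfaces cudagraph-related
# debug output such as the partition messages above. Set before importing torch.
os.environ.setdefault("TORCH_LOGS", "cudagraphs")

import torch
from torch import nn


class TinyAffine(nn.Module):
    # Hypothetical stand-in for a small pointwise layer inside a larger model.
    def __init__(self, feat_size):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, x):
        # Deliberate CPU round-trip inside the compiled region: a stand-in for
        # the kind of non-GPU work that forces cudagraphs to partition.
        scale = self.weight.detach().cpu().to(x.device)
        return x * scale + self.bias


if torch.cuda.is_available():
    mod = TinyAffine(512).cuda()
    compiled = torch.compile(mod, mode="reduce-overhead")  # cudagraphs enabled
    x = torch.randn(8, 128, 512, device="cuda")
    for _ in range(3):  # a few iterations so cudagraphs record after warm-up
        compiled(x)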
Found from : 2025-08-14T21:54:38.7270032Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7270138Z return mod(**inputs) 2025-08-14T21:54:38.7270532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7270662Z outputs = self.mobilebert( 2025-08-14T21:54:38.7270958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7271032Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7271309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7271395Z layer_outputs = layer_module( 2025-08-14T21:54:38.7271673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7271775Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7272050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7272168Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7272444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7272553Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7272557Z 2025-08-14T21:54:38.7272644Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7272747Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7272952Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7273017Z return mod(**inputs) 2025-08-14T21:54:38.7273296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7273376Z outputs = self.mobilebert( 2025-08-14T21:54:38.7273649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7273723Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7274018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7274088Z layer_outputs = layer_module( 2025-08-14T21:54:38.7274370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7274464Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7274737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7274869Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7275146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7275272Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7275549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7275639Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7275643Z 2025-08-14T21:54:38.7275729Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7275831Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7276026Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7276098Z return mod(**inputs) 2025-08-14T21:54:38.7276394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7276475Z outputs = self.mobilebert( 2025-08-14T21:54:38.7276749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7276844Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7277144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7277216Z layer_outputs = layer_module( 2025-08-14T21:54:38.7277496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.7277633Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.7277912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7278030Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7278034Z 2025-08-14T21:54:38.7278111Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7278211Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7278418Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7278484Z return mod(**inputs) 2025-08-14T21:54:38.7278769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7278842Z outputs = self.mobilebert( 2025-08-14T21:54:38.7279118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7279200Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7279482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7279552Z layer_outputs = layer_module( 2025-08-14T21:54:38.7279832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7279989Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7280273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.7280393Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.7280668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7280765Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7280769Z 2025-08-14T21:54:38.7280847Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7280958Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7281151Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7281217Z return mod(**inputs) 2025-08-14T21:54:38.7281502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7281573Z outputs = self.mobilebert( 2025-08-14T21:54:38.7281853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7281928Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7282201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7282277Z layer_outputs = layer_module( 2025-08-14T21:54:38.7282567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7282724Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7283006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.7283584Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.7283888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.7284010Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7284303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7284403Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7284406Z 2025-08-14T21:54:38.7284485Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7284600Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7284795Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7284862Z return mod(**inputs) 2025-08-14T21:54:38.7285147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7285219Z outputs = self.mobilebert( 2025-08-14T21:54:38.7285492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7285574Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7285845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7285924Z layer_outputs = layer_module( 2025-08-14T21:54:38.7286199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.7286361Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.7286643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.7286755Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.7287039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.7287127Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.7287405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7287507Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7287510Z 2025-08-14T21:54:38.7287591Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7287671Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7287755Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7287831Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7287912Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7287990Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7288063Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7288147Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7288223Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7288298Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7288410Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7288604Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7288670Z return mod(**inputs) 2025-08-14T21:54:38.7288975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7289049Z outputs = self.mobilebert( 2025-08-14T21:54:38.7289334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7289428Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7289724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7289805Z layer_outputs = layer_module( 2025-08-14T21:54:38.7290083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7290197Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7290472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7290594Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7290877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7290999Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7291280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7291379Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7291382Z 2025-08-14T21:54:38.7291460Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7291574Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7291783Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7291865Z return mod(**inputs) 2025-08-14T21:54:38.7292254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7292325Z outputs = self.mobilebert( 2025-08-14T21:54:38.7292610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7292686Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7292965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7293045Z layer_outputs = layer_module( 2025-08-14T21:54:38.7293317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7293413Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7293700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7293811Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7294092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7294200Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7294206Z 2025-08-14T21:54:38.7294285Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7294396Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7294594Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7294668Z return mod(**inputs) 2025-08-14T21:54:38.7294948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7295020Z outputs = self.mobilebert( 2025-08-14T21:54:38.7295318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7295393Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7295666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7295765Z layer_outputs = layer_module( 2025-08-14T21:54:38.7296061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7296164Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7296441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7296582Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7296866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7296986Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7297268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7297359Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7297365Z 2025-08-14T21:54:38.7297444Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7297559Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7297765Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7297834Z return mod(**inputs) 2025-08-14T21:54:38.7298140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7298213Z outputs = self.mobilebert( 2025-08-14T21:54:38.7298519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7298594Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7298888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7298970Z layer_outputs = layer_module( 2025-08-14T21:54:38.7299262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7299365Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7299730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7299858Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7300171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7300290Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7300295Z 2025-08-14T21:54:38.7300386Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7300497Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7300721Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7300802Z return mod(**inputs) 2025-08-14T21:54:38.7301112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7301185Z outputs = self.mobilebert( 2025-08-14T21:54:38.7301478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7301552Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7301839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7301940Z layer_outputs = layer_module( 2025-08-14T21:54:38.7302219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7302338Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7302630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7302754Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7303035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7303172Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7303460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7303553Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7303556Z 2025-08-14T21:54:38.7303634Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7303746Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7303943Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7304016Z return mod(**inputs) 2025-08-14T21:54:38.7304299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7304370Z outputs = self.mobilebert( 2025-08-14T21:54:38.7304655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7304729Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7305004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7305083Z layer_outputs = layer_module( 2025-08-14T21:54:38.7305357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7305458Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7305735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7305845Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7306128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7306237Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7306241Z 2025-08-14T21:54:38.7306329Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7306432Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7306631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7306704Z return mod(**inputs) 2025-08-14T21:54:38.7306981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7307053Z outputs = self.mobilebert( 2025-08-14T21:54:38.7307343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7307415Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7307701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7307771Z layer_outputs = layer_module( 2025-08-14T21:54:38.7308046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7308174Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7308453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7308602Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7308894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7309016Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7309297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7309408Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7309411Z 2025-08-14T21:54:38.7309490Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7309600Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7309796Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7309867Z return mod(**inputs) 2025-08-14T21:54:38.7310145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7310218Z outputs = self.mobilebert( 2025-08-14T21:54:38.7310503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7310575Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7310857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7310928Z layer_outputs = layer_module( 2025-08-14T21:54:38.7311200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.7311329Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.7311603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7311713Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7311725Z 2025-08-14T21:54:38.7311804Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7311908Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7312109Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7312174Z return mod(**inputs) 2025-08-14T21:54:38.7312459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7312535Z outputs = self.mobilebert( 2025-08-14T21:54:38.7312803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7312881Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7313148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7313217Z layer_outputs = layer_module( 2025-08-14T21:54:38.7313492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7313645Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7313914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.7314037Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.7314324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7314424Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7314427Z 2025-08-14T21:54:38.7314504Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7314622Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7314819Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7314898Z return mod(**inputs) 2025-08-14T21:54:38.7315176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7315282Z outputs = self.mobilebert( 2025-08-14T21:54:38.7315548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7315627Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7315896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7315963Z layer_outputs = layer_module( 2025-08-14T21:54:38.7316240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7316394Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7316674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.7316795Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.7317063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.7317189Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7317457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7317555Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7317558Z 2025-08-14T21:54:38.7317635Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7317736Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7317937Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7318001Z return mod(**inputs) 2025-08-14T21:54:38.7318274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7318352Z outputs = self.mobilebert( 2025-08-14T21:54:38.7318622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7318699Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7318970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7319039Z layer_outputs = layer_module( 2025-08-14T21:54:38.7319317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.7319477Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.7319755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.7319864Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.7320130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.7320223Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.7320510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7320600Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7320610Z 2025-08-14T21:54:38.7320688Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7320781Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7320864Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7320950Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7321024Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7321105Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7321196Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7321268Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7321351Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7321425Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7321531Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7321726Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7321792Z return mod(**inputs) 2025-08-14T21:54:38.7322072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7322144Z outputs = self.mobilebert( 2025-08-14T21:54:38.7322412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7322492Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7322761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7322837Z layer_outputs = layer_module( 2025-08-14T21:54:38.7323106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7323192Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7323474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7323595Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7323870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7323999Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7324269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7324368Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7324372Z 2025-08-14T21:54:38.7324447Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7324548Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7324751Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7324815Z return mod(**inputs) 2025-08-14T21:54:38.7325099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7325169Z outputs = self.mobilebert( 2025-08-14T21:54:38.7325443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7325522Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7325794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7325864Z layer_outputs = layer_module( 2025-08-14T21:54:38.7326144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7326255Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7326529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7326637Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7326946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7327064Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7327067Z 2025-08-14T21:54:38.7327144Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7327276Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7327469Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7327532Z return mod(**inputs) 2025-08-14T21:54:38.7327811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7327880Z outputs = self.mobilebert( 2025-08-14T21:54:38.7328146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7328224Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7328499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7328574Z layer_outputs = layer_module( 2025-08-14T21:54:38.7328845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7328938Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7329215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7329335Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7329610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7329727Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7329999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7330095Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7330098Z 2025-08-14T21:54:38.7330173Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7330279Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7330470Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7330533Z return mod(**inputs) 2025-08-14T21:54:38.7330813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7330880Z outputs = self.mobilebert( 2025-08-14T21:54:38.7331150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7331233Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7331503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7331578Z layer_outputs = layer_module( 2025-08-14T21:54:38.7331851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7331942Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7332224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7332387Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7332657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7332771Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7332791Z 2025-08-14T21:54:38.7332870Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7332991Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7333191Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7333274Z return mod(**inputs) 2025-08-14T21:54:38.7333574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7333646Z outputs = self.mobilebert( 2025-08-14T21:54:38.7333940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7334012Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7334299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7334381Z layer_outputs = layer_module( 2025-08-14T21:54:38.7334667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7334762Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7335055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7335181Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7335476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7335601Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7335887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7335988Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7335992Z 2025-08-14T21:54:38.7336070Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7336183Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7336387Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7336456Z return mod(**inputs) 2025-08-14T21:54:38.7336753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7336825Z outputs = self.mobilebert( 2025-08-14T21:54:38.7337112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7337195Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7337489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7337569Z layer_outputs = layer_module( 2025-08-14T21:54:38.7337857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7337949Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7338241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7338351Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7338641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7338771Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7338776Z 2025-08-14T21:54:38.7338854Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7338962Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7339176Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7339242Z return mod(**inputs) 2025-08-14T21:54:38.7339638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7339718Z outputs = self.mobilebert( 2025-08-14T21:54:38.7340036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7340117Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7340429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7340516Z layer_outputs = layer_module( 2025-08-14T21:54:38.7340824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7340931Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7341225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7341355Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7341656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7341918Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7342219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7342328Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7342332Z 2025-08-14T21:54:38.7342417Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7342536Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7342744Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7342814Z return mod(**inputs) 2025-08-14T21:54:38.7343122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7343200Z outputs = self.mobilebert( 2025-08-14T21:54:38.7343503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7343580Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7343885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7343971Z layer_outputs = layer_module( 2025-08-14T21:54:38.7344263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.7344392Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.7344698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7344814Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7344820Z 2025-08-14T21:54:38.7344913Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7345019Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7345226Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7345306Z return mod(**inputs) 2025-08-14T21:54:38.7345669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7345753Z outputs = self.mobilebert( 2025-08-14T21:54:38.7346042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7346146Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7346471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7346551Z layer_outputs = layer_module( 2025-08-14T21:54:38.7346872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7347048Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7347339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.7347476Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.7347769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7347868Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7347872Z 2025-08-14T21:54:38.7347965Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7348076Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7348291Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7348363Z return mod(**inputs) 2025-08-14T21:54:38.7348658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7348741Z outputs = self.mobilebert( 2025-08-14T21:54:38.7349032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7349111Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7349406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7349482Z layer_outputs = layer_module( 2025-08-14T21:54:38.7349779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7349944Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7350235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.7350374Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.7350668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.7350800Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7351091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7351189Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7351194Z 2025-08-14T21:54:38.7351286Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7351393Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
    query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
    shared_attention_input = self.attention(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
    layer_input = self.LayerNorm(layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
    attention_output = self.output(self_outputs[0], layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
    layer_output = self.LayerNorm(layer_output + residual_tensor_1)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
    layer_output = self.bottleneck(layer_output, residual_tensor_2)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:54:38.7456239Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7456309Z return mod(**inputs) 2025-08-14T21:54:38.7456610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7456685Z outputs = self.mobilebert( 2025-08-14T21:54:38.7456995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7457072Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7457377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7457460Z layer_outputs = layer_module( 2025-08-14T21:54:38.7457764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7457855Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7458153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7458284Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7458586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7458714Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7459006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7459109Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7459113Z 2025-08-14T21:54:38.7459194Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7459326Z cudagraph partition due to non gpu ops. 
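Every partition reason recorded above bottoms out in the same two MobileBert source lines: the elementwise scale-and-shift at modeling_mobilebert.py line 138 (return input_tensor * self.weight + self.bias) and the activation call at line 360. As a rough illustration only, here is a minimal, self-contained sketch of that elementwise pattern compiled with torch.compile; NoNormLike, the feature size, and the input shape are invented for the example and are not part of the benchmark harness.

    import torch
    import torch.nn as nn

    class NoNormLike(nn.Module):
        # Hypothetical stand-in for the layer behind modeling_mobilebert.py line 138;
        # it performs the same elementwise scale-and-shift the log keeps flagging.
        def __init__(self, feat_size: int):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(feat_size))
            self.bias = nn.Parameter(torch.zeros(feat_size))

        def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
            return input_tensor * self.weight + self.bias

    mod = NoNormLike(512)
    compiled = torch.compile(mod)  # default inductor backend, as in this benchmark job
    out = compiled(torch.randn(8, 128, 512))
    print(out.shape)  # torch.Size([8, 128, 512])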
Found from : 2025-08-14T21:54:38.7459592Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7459670Z return mod(**inputs) 2025-08-14T21:54:38.7459986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7460085Z outputs = self.mobilebert( 2025-08-14T21:54:38.7460394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7460483Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7460792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7460876Z layer_outputs = layer_module( 2025-08-14T21:54:38.7461167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7461267Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7461566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7461685Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7461988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7462104Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7462109Z 2025-08-14T21:54:38.7462192Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7462309Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7462517Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7462588Z return mod(**inputs) 2025-08-14T21:54:38.7462891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7462964Z outputs = self.mobilebert( 2025-08-14T21:54:38.7463262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7463341Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7463632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7463715Z layer_outputs = layer_module( 2025-08-14T21:54:38.7464007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7464113Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7464417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7464548Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7464844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7464973Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7465271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7465365Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7465371Z 2025-08-14T21:54:38.7465452Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7465565Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7465769Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7465838Z return mod(**inputs) 2025-08-14T21:54:38.7466160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7466237Z outputs = self.mobilebert( 2025-08-14T21:54:38.7466534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7466629Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7466952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7467037Z layer_outputs = layer_module( 2025-08-14T21:54:38.7467346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7467444Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7467746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7467863Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7468160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7468277Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7468281Z 2025-08-14T21:54:38.7468365Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7468481Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7468693Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7468783Z return mod(**inputs) 2025-08-14T21:54:38.7469062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7469135Z outputs = self.mobilebert( 2025-08-14T21:54:38.7469419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7469492Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7469768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7469846Z layer_outputs = layer_module( 2025-08-14T21:54:38.7470122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7470224Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7470500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7470623Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7470911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7471031Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7471310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7471401Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7471404Z 2025-08-14T21:54:38.7471483Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7471592Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7471787Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7471854Z return mod(**inputs) 2025-08-14T21:54:38.7472138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7472207Z outputs = self.mobilebert( 2025-08-14T21:54:38.7472506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7472583Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7472856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7472963Z layer_outputs = layer_module( 2025-08-14T21:54:38.7473251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7473352Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7473649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7473757Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7474043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7474151Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7474155Z 2025-08-14T21:54:38.7474241Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7474346Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7474543Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7474618Z return mod(**inputs) 2025-08-14T21:54:38.7474894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7474965Z outputs = self.mobilebert( 2025-08-14T21:54:38.7475245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7475316Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7475600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7475668Z layer_outputs = layer_module( 2025-08-14T21:54:38.7475939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7476041Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7476319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7476439Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7476722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7476841Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7477124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7477215Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7477218Z 2025-08-14T21:54:38.7477296Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7477409Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7477606Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7477682Z return mod(**inputs) 2025-08-14T21:54:38.7477970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7478047Z outputs = self.mobilebert( 2025-08-14T21:54:38.7478346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7478423Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7478797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7478881Z layer_outputs = layer_module( 2025-08-14T21:54:38.7479173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.7479328Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.7479640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7479760Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7479781Z 2025-08-14T21:54:38.7479874Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7479983Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7480205Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7480275Z return mod(**inputs) 2025-08-14T21:54:38.7480569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7480653Z outputs = self.mobilebert( 2025-08-14T21:54:38.7480946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7481028Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7481335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7481412Z layer_outputs = layer_module( 2025-08-14T21:54:38.7481711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7481883Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7482180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.7482320Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.7482615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7482719Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7482724Z 2025-08-14T21:54:38.7482807Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7482914Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7483135Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7483200Z return mod(**inputs) 2025-08-14T21:54:38.7483477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7483561Z outputs = self.mobilebert( 2025-08-14T21:54:38.7483851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7483937Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7484230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7484306Z layer_outputs = layer_module( 2025-08-14T21:54:38.7484608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7484773Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7485071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.7485201Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.7485515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.7485654Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7485963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7486082Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7486086Z 2025-08-14T21:54:38.7486170Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7486277Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7486509Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7486578Z return mod(**inputs) 2025-08-14T21:54:38.7486875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7486960Z outputs = self.mobilebert( 2025-08-14T21:54:38.7487253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7487338Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7487633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7487707Z layer_outputs = layer_module( 2025-08-14T21:54:38.7488005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.7488176Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.7488477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.7488594Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.7488886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.7488983Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.7489281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7489374Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7489378Z 2025-08-14T21:54:38.7489467Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7489551Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7489637Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7489717Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7489796Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7489881Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7489959Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7490037Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7490123Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7490200Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7490307Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7490524Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7490595Z return mod(**inputs) 2025-08-14T21:54:38.7490898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7490974Z outputs = self.mobilebert( 2025-08-14T21:54:38.7491269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7491356Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7491671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7491756Z layer_outputs = layer_module( 2025-08-14T21:54:38.7492047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7492163Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7492479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7492612Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7492920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7493058Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7493351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7493453Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7493457Z 2025-08-14T21:54:38.7493539Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7493646Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7493867Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7493937Z return mod(**inputs) 2025-08-14T21:54:38.7494242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7494317Z outputs = self.mobilebert( 2025-08-14T21:54:38.7494608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7494693Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7494986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7495061Z layer_outputs = layer_module( 2025-08-14T21:54:38.7495360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7495461Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7495763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7495880Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7496176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7496300Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7496303Z 2025-08-14T21:54:38.7496388Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7496507Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7496717Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7496786Z return mod(**inputs) 2025-08-14T21:54:38.7497091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7497166Z outputs = self.mobilebert( 2025-08-14T21:54:38.7497457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7497543Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7497837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7497916Z layer_outputs = layer_module( 2025-08-14T21:54:38.7498231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7498332Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7498633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7498783Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7499098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7499225Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7499627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7499741Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7499745Z 2025-08-14T21:54:38.7499832Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7499946Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7500175Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7500248Z return mod(**inputs) 2025-08-14T21:54:38.7500575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7500653Z outputs = self.mobilebert( 2025-08-14T21:54:38.7500959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7501058Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7501355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7501430Z layer_outputs = layer_module( 2025-08-14T21:54:38.7501742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7501846Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7502159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7502282Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7502590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7502719Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7502725Z 2025-08-14T21:54:38.7502812Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7502932Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7503149Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7503220Z return mod(**inputs) 2025-08-14T21:54:38.7503535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7503611Z outputs = self.mobilebert( 2025-08-14T21:54:38.7503912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7504000Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7504303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7504387Z layer_outputs = layer_module( 2025-08-14T21:54:38.7504692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7504791Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7505130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7505266Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7505588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7505737Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7506055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7506163Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7506185Z 2025-08-14T21:54:38.7506270Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7506389Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7506606Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7506677Z return mod(**inputs) 2025-08-14T21:54:38.7506986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7507063Z outputs = self.mobilebert( 2025-08-14T21:54:38.7507362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7507451Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7507757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7507843Z layer_outputs = layer_module( 2025-08-14T21:54:38.7508147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7508247Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7508557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7508677Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7508995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7509116Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7509120Z 2025-08-14T21:54:38.7509209Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7509327Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7509543Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7509616Z return mod(**inputs) 2025-08-14T21:54:38.7509933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7510012Z outputs = self.mobilebert( 2025-08-14T21:54:38.7510330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7510410Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7510721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7510807Z layer_outputs = layer_module( 2025-08-14T21:54:38.7511125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7511226Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7511550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7511682Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7512019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7512151Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7512463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7512588Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7512592Z 2025-08-14T21:54:38.7512706Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7512830Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7513047Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7513138Z return mod(**inputs) 2025-08-14T21:54:38.7513449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7513527Z outputs = self.mobilebert( 2025-08-14T21:54:38.7513847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7513930Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7514232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7514316Z layer_outputs = layer_module( 2025-08-14T21:54:38.7514616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward 2025-08-14T21:54:38.7514740Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:54:38.7515050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7515166Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7515169Z 2025-08-14T21:54:38.7515261Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7515370Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7515576Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7515654Z return mod(**inputs) 2025-08-14T21:54:38.7515949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7516025Z outputs = self.mobilebert( 2025-08-14T21:54:38.7516334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7516412Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7516719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7516791Z layer_outputs = layer_module( 2025-08-14T21:54:38.7517094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7517267Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7517568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward 2025-08-14T21:54:38.7517709Z layer_output = self.LayerNorm(layer_output + residual_tensor_1) 2025-08-14T21:54:38.7518010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7518109Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7518113Z 2025-08-14T21:54:38.7518204Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7518313Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7518518Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7518614Z return mod(**inputs) 2025-08-14T21:54:38.7518907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7519011Z outputs = self.mobilebert( 2025-08-14T21:54:38.7519307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7519401Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7519712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7519819Z layer_outputs = layer_module( 2025-08-14T21:54:38.7520125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward 2025-08-14T21:54:38.7520291Z layer_output = self.output(intermediate_output, attention_output, hidden_states) 2025-08-14T21:54:38.7520583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward 2025-08-14T21:54:38.7520722Z layer_output = self.bottleneck(layer_output, residual_tensor_2) 2025-08-14T21:54:38.7521016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward 2025-08-14T21:54:38.7521151Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7521443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7521539Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7521543Z 2025-08-14T21:54:38.7521635Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7521742Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7521951Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7522029Z return mod(**inputs) 2025-08-14T21:54:38.7522327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7522410Z outputs = self.mobilebert( 2025-08-14T21:54:38.7522702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7522778Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7523077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7523154Z layer_outputs = layer_module( 2025-08-14T21:54:38.7523443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward 2025-08-14T21:54:38.7523619Z query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states) 2025-08-14T21:54:38.7523914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward 2025-08-14T21:54:38.7524036Z shared_attention_input = self.attention(hidden_states) 2025-08-14T21:54:38.7524334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward 2025-08-14T21:54:38.7524425Z layer_input = self.LayerNorm(layer_input) 2025-08-14T21:54:38.7524727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7524830Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7524833Z 2025-08-14T21:54:38.7524918Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7524994Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525070Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525173Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525249Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525324Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525407Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525499Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525581Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525670Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7525776Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7525982Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7526065Z return mod(**inputs) 2025-08-14T21:54:38.7526343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7526424Z outputs = self.mobilebert( 2025-08-14T21:54:38.7526704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7526788Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7527063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7527134Z layer_outputs = layer_module( 2025-08-14T21:54:38.7527415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward 2025-08-14T21:54:38.7527502Z self_attention_outputs = self.attention( 2025-08-14T21:54:38.7527777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward 2025-08-14T21:54:38.7527906Z attention_output = self.output(self_outputs[0], layer_input) 2025-08-14T21:54:38.7528188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward 2025-08-14T21:54:38.7528317Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7528590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7528684Z return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7528688Z 2025-08-14T21:54:38.7528775Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7528877Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:54:38.7529082Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7529149Z return mod(**inputs) 2025-08-14T21:54:38.7529426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7529503Z outputs = self.mobilebert( 2025-08-14T21:54:38.7529777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7529849Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7530129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7530200Z layer_outputs = layer_module( 2025-08-14T21:54:38.7530484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7530579Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7530854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7530971Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7531614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7531743Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7531747Z 2025-08-14T21:54:38.7531833Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7531943Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7532183Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7532326Z return mod(**inputs) 2025-08-14T21:54:38.7532607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7532709Z outputs = self.mobilebert( 2025-08-14T21:54:38.7532983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7533063Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7533341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7533413Z layer_outputs = layer_module( 2025-08-14T21:54:38.7533715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7533815Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7534118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward 2025-08-14T21:54:38.7534249Z layer_outputs = self.output(intermediate_output, hidden_states) 2025-08-14T21:54:38.7534543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward 2025-08-14T21:54:38.7534678Z layer_outputs = self.LayerNorm(layer_outputs + residual_tensor) 2025-08-14T21:54:38.7534971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward 2025-08-14T21:54:38.7535065Z 
return input_tensor * self.weight + self.bias 2025-08-14T21:54:38.7535078Z 2025-08-14T21:54:38.7535162Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7535273Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:54:38.7535490Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:54:38.7535562Z return mod(**inputs) 2025-08-14T21:54:38.7535859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward 2025-08-14T21:54:38.7535944Z outputs = self.mobilebert( 2025-08-14T21:54:38.7536235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward 2025-08-14T21:54:38.7536319Z encoder_outputs = self.encoder( 2025-08-14T21:54:38.7536611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward 2025-08-14T21:54:38.7536686Z layer_outputs = layer_module( 2025-08-14T21:54:38.7536986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward 2025-08-14T21:54:38.7537086Z attention_output = ffn_module(attention_output) 2025-08-14T21:54:38.7537380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward 2025-08-14T21:54:38.7537506Z intermediate_output = self.intermediate(hidden_states) 2025-08-14T21:54:38.7537800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward 2025-08-14T21:54:38.7537921Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:54:38.7537925Z 2025-08-14T21:54:38.7538007Z cudagraph partition due to non gpu ops 2025-08-14T21:54:38.7538136Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 469, in forward
    intermediate_output = self.intermediate(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 515, in forward
    attention_output = ffn_module(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 470, in forward
    layer_outputs = self.output(intermediate_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 458, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 518, in forward
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 360, in forward
    hidden_states = self.intermediate_act_fn(hidden_states)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 397, in forward
    layer_output = self.LayerNorm(layer_output + residual_tensor_1)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
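Every MobileBERT trace in these records (and in the ones that continue below) bottoms out in the same frame, modeling_mobilebert.py line 138, which applies an elementwise affine transform in place of a true LayerNorm. A minimal sketch of what that single frame computes; the class name below is illustrative, not the transformers implementation:

import torch
from torch import nn

# Illustrative stand-in for the op behind the repeated final frame
# "return input_tensor * self.weight + self.bias" (modeling_mobilebert.py:138).
class NoNormLike(nn.Module):
    def __init__(self, feat_size: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(feat_size))
        self.bias = nn.Parameter(torch.zeros(feat_size))

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # Pure elementwise affine; no mean/variance statistics are computed.
        return input_tensor * self.weight + self.bias

print(NoNormLike(16)(torch.randn(2, 8, 16)).shape)  # torch.Size([2, 8, 16])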
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
    layer_output = self.bottleneck(layer_output, residual_tensor_2)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 496, in forward
    query_tensor, key_tensor, value_tensor, layer_input = self.bottleneck(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 444, in forward
    shared_attention_input = self.attention(hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 410, in forward
    layer_input = self.LayerNorm(layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 500, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 344, in forward
    attention_output = self.output(self_outputs[0], layer_input)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 295, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
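For context, these records appear to come from inductor's cudagraph partitioning while the benchmark runs the compiled MobileBERT model. A hedged sketch of the kind of torch.compile call the dynamo benchmark harness makes (the toy model below is an assumption, not the harness code); on a CPU-only shard like this one, every op is a "non gpu op", which is one plausible reading of why the partitioner logs these messages:

import torch
from torch import nn

# Toy stand-in model; the job above is compiling a transformers MobileBERT model
# through benchmarks/dynamo/huggingface.py.
model = nn.Sequential(nn.Linear(16, 16), nn.GELU(), nn.Linear(16, 16)).eval()

# The benchmark harness wraps the model with torch.compile (inductor backend).
# With no CUDA ops in the graph there is nothing for a CUDA graph to capture,
# which is consistent with the "cudagraph partition due to non gpu ops" messages.
compiled = torch.compile(model)

with torch.no_grad():
    out = compiled(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 16])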
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
    outputs = self.mobilebert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 794, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 557, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 519, in forward
    layer_output = self.output(intermediate_output, attention_output, hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 398, in forward
    layer_output = self.bottleneck(layer_output, residual_tensor_2)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 374, in forward
    layer_outputs = self.LayerNorm(layer_outputs + residual_tensor)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 138, in forward
    return input_tensor * self.weight + self.bias

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1274, in forward
    start_loss = loss_fct(start_logits, start_positions)

cudagraph partition due to non gpu ops.
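The start_loss frame above and the matching end_loss frame in the record just below follow the usual extractive question-answering span-loss pattern. A minimal sketch of that computation with assumed shapes (it mirrors the standard QA-head convention, not code copied from the model):

import torch
from torch import nn

# Assumed shapes for illustration: (batch, seq_len) logits, (batch,) position labels.
batch, seq_len = 4, 128
start_logits = torch.randn(batch, seq_len)
end_logits = torch.randn(batch, seq_len)
start_positions = torch.randint(0, seq_len, (batch,))
end_positions = torch.randint(0, seq_len, (batch,))

loss_fct = nn.CrossEntropyLoss()
start_loss = loss_fct(start_logits, start_positions)  # frame at modeling_mobilebert.py:1274
end_loss = loss_fct(end_logits, end_positions)        # frame at modeling_mobilebert.py:1275
total_loss = (start_loss + end_loss) / 2              # standard QA-head convention
print(total_loss.item())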
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1275, in forward
    end_loss = loss_fct(end_logits, end_positions)

2025-08-14T21:54:52.6951048Z Compilation time (from dynamo_timed): 47.831213908
2025-08-14T21:54:52.6953946Z pass
2025-08-14T21:54:52.6954864Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:54:52.6960617Z TIMING: _recursive_pre_grad_passes:0.16975 _recursive_joint_graph_passes:1.43025 _recursive_post_grad_passes:0.2257 async_compile.wait:0.28283 code_gen:10.25639 inductor_compile:15.66402 backend_compile:34.65722 gc:0.00054 entire_frame_compile:47.83121 total_wall_time:47.83121
2025-08-14T21:54:52.6961855Z STATS: call_* op count: 1453 | FakeTensorMode.__torch_dispatch__:103267 | FakeTensor.__torch_dispatch__:12538 | ProxyTorchDispatchMode.__torch_dispatch__:23231
2025-08-14T21:54:52.6962427Z Dynamo produced 1 graphs covering 1453 ops with 0 graph breaks (0 unique)
2025-08-14T21:54:59.0969382Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:54:59.0970384Z   from pkg_resources import resource_filename
2025-08-14T21:55:01.7645535Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:55:01.7645848Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:55:01.7656635Z cpu eval OPTForCausalLM
2025-08-14T21:55:03.5363387Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:04.5785735Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:05.5809655Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
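The TIMING line above packs the dynamo_timed phase breakdown into one string. A throwaway helper (not part of the benchmark harness) to split it into name/seconds pairs for easier reading; note that several phases are nested inside others (for example code_gen inside inductor_compile, and inductor_compile inside backend_compile), so the values are not meant to sum to total_wall_time:

# Throwaway helper: split the TIMING line above into name -> seconds pairs
# and print them sorted by cost.
timing_line = (
    "_recursive_pre_grad_passes:0.16975 _recursive_joint_graph_passes:1.43025 "
    "_recursive_post_grad_passes:0.2257 async_compile.wait:0.28283 code_gen:10.25639 "
    "inductor_compile:15.66402 backend_compile:34.65722 gc:0.00054 "
    "entire_frame_compile:47.83121 total_wall_time:47.83121"
)

timings = {}
for token in timing_line.split():
    name, _, value = token.rpartition(":")
    timings[name] = float(value)

for name, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    # Phases overlap/nest, so they are not additive.
    print(f"{seconds:9.3f}s  {name}")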
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
    output = func(self, *args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
    outputs = self.model.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
    output = func(self, *args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
    output = func(self, *args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
    outputs = self.model.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
    output = func(self, *args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
    output = func(self, *args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
    outputs = self.model.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
    output = func(self, *args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward
    hidden_states = self.activation_fn(hidden_states)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
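The OPT records above all route through sdpa_attention_forward. A minimal sketch of the two frames the partitioner points at (the scaled_dot_product_attention call at sdpa_attention.py:81 and the transpose/contiguous at line 91), with shapes assumed for illustration:

import torch
import torch.nn.functional as F

# Assumed illustration shapes: (batch, num_heads, seq_len, head_dim).
q = torch.randn(2, 12, 32, 64)
k = torch.randn(2, 12, 32, 64)
v = torch.randn(2, 12, 32, 64)

# Frame at sdpa_attention.py:81 in the traces above.
attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Frame at sdpa_attention.py:91: back to (batch, seq_len, num_heads, head_dim).
attn_output = attn_output.transpose(1, 2).contiguous()
print(attn_output.shape)  # torch.Size([2, 32, 12, 64])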
Found from : 2025-08-14T21:55:15.4849113Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4849471Z return mod(**inputs) 2025-08-14T21:55:15.4849823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4850217Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4850628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4851042Z outputs = self.model.decoder( 2025-08-14T21:55:15.4851425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4851821Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4852229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4852639Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4853072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4853477Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4854014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:55:15.4854502Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:55:15.4854940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:55:15.4855386Z attn_output, attn_weights = attention_interface( 2025-08-14T21:55:15.4855929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:55:15.4856454Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:55:15.4856663Z 2025-08-14T21:55:15.4856776Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:15.4857164Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4857507Z return mod(**inputs) 2025-08-14T21:55:15.4857864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4858255Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4858649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4859059Z outputs = self.model.decoder( 2025-08-14T21:55:15.4859438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4859922Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4860331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4860750Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4861147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4861544Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4861972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:55:15.4862411Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:55:15.4862845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:55:15.4863273Z attn_output, attn_weights = attention_interface( 2025-08-14T21:55:15.4863754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:55:15.4864248Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:55:15.4864423Z 2025-08-14T21:55:15.4864519Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4864747Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4865007Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:15.4865393Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4865736Z return mod(**inputs) 2025-08-14T21:55:15.4866094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4866478Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4866879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4867278Z outputs = self.model.decoder( 2025-08-14T21:55:15.4867648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4868031Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4868454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4868867Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4869243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4869652Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4870052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:55:15.4870480Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:55:15.4870643Z 2025-08-14T21:55:15.4870764Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4871012Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4871246Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4871513Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4871737Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4871957Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4872176Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4872396Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4872640Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:15.4873041Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4873392Z return mod(**inputs) 2025-08-14T21:55:15.4873738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4874139Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4874542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4874951Z outputs = self.model.decoder( 2025-08-14T21:55:15.4875323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4875701Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4876100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4876497Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4876871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4877258Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4877657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:55:15.4878087Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:55:15.4878524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:55:15.4878966Z attn_output, attn_weights = attention_interface( 2025-08-14T21:55:15.4879445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:55:15.4879954Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:55:15.4880159Z 2025-08-14T21:55:15.4880268Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:15.4880657Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4881005Z return mod(**inputs) 2025-08-14T21:55:15.4881356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4881733Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4882137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4882535Z outputs = self.model.decoder( 2025-08-14T21:55:15.4882910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4883312Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4883718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4884121Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4884521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4884905Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4885303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:55:15.4885770Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:55:15.4886179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:55:15.4886578Z attn_output, attn_weights = attention_interface( 2025-08-14T21:55:15.4887047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:55:15.4887527Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:55:15.4887694Z 2025-08-14T21:55:15.4887787Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4887998Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4888238Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:15.4888631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4888961Z return mod(**inputs) 2025-08-14T21:55:15.4889302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4889684Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4890085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4890495Z outputs = self.model.decoder( 2025-08-14T21:55:15.4890866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4891243Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4891656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4892027Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4892378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4892740Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4893120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-08-14T21:55:15.4893522Z hidden_states = self.activation_fn(hidden_states) 2025-08-14T21:55:15.4893682Z 2025-08-14T21:55:15.4893763Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4893976Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4894181Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4894388Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4894593Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4894798Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4895004Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4895212Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4895446Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:15.4895808Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4896142Z return mod(**inputs) 2025-08-14T21:55:15.4896476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4896829Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4897236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4897625Z outputs = self.model.decoder( 2025-08-14T21:55:15.4897973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4898340Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4898711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4899086Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4899428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4899933Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4900336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:55:15.4900779Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:55:15.4901219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:55:15.4901659Z attn_output, attn_weights = attention_interface( 2025-08-14T21:55:15.4902134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:55:15.4902643Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:55:15.4902855Z 2025-08-14T21:55:15.4902967Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:55:15.4903416Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:55:15.4903788Z return mod(**inputs) 2025-08-14T21:55:15.4904143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4904534Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4904947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-08-14T21:55:15.4905360Z outputs = self.model.decoder( 2025-08-14T21:55:15.4905729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-08-14T21:55:15.4906116Z output = func(self, *args, **kwargs) 2025-08-14T21:55:15.4906521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-08-14T21:55:15.4906927Z layer_outputs = decoder_layer( 2025-08-14T21:55:15.4907312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:55:15.4907706Z return super().__call__(*args, **kwargs) 2025-08-14T21:55:15.4908115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-08-14T21:55:15.4908544Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:55:15.4908981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-08-14T21:55:15.4909415Z attn_output, attn_weights = attention_interface( 2025-08-14T21:55:15.4909894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:55:15.4910391Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:55:15.4910577Z 2025-08-14T21:55:15.4910665Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4910900Z cudagraph partition due to non gpu ops 2025-08-14T21:55:15.4911148Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:55:15.4911513Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:15.4911839Z return mod(**inputs)
2025-08-14T21:55:15.4912187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.4912545Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.4912920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-08-14T21:55:15.4913314Z outputs = self.model.decoder(
2025-08-14T21:55:15.4913652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.4914006Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.4914398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-08-14T21:55:15.4914792Z layer_outputs = decoder_layer(
2025-08-14T21:55:15.4915135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:15.4915497Z return super().__call__(*args, **kwargs)
2025-08-14T21:55:15.4915876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward
2025-08-14T21:55:15.4916268Z hidden_states = self.activation_fn(hidden_states)
2025-08-14T21:55:15.4916427Z
2025-08-14T21:55:15.4916510Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4916723Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4916934Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4917141Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4917352Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4917558Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4917760Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4917970Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4918204Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:15.4918560Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:15.4918891Z return mod(**inputs)
2025-08-14T21:55:15.4919217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.4919571Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.4919934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-08-14T21:55:15.4920313Z outputs = self.model.decoder(
2025-08-14T21:55:15.4920658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.4920998Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.4921372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-08-14T21:55:15.4921750Z layer_outputs = decoder_layer(
2025-08-14T21:55:15.4922096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:15.4922447Z return super().__call__(*args, **kwargs)
2025-08-14T21:55:15.4922821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward
2025-08-14T21:55:15.4923216Z hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:55:15.4923603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward
2025-08-14T21:55:15.4923998Z attn_output, attn_weights = attention_interface(
2025-08-14T21:55:15.4924440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:55:15.4924921Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:55:15.4925101Z
2025-08-14T21:55:15.4925206Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:15.4925568Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:15.4925922Z return mod(**inputs)
2025-08-14T21:55:15.4926257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.4926612Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.4927011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-08-14T21:55:15.4927389Z outputs = self.model.decoder(
2025-08-14T21:55:15.4927727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.4928095Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.4928486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-08-14T21:55:15.4928852Z layer_outputs = decoder_layer(
2025-08-14T21:55:15.4929189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:15.4929542Z return super().__call__(*args, **kwargs)
2025-08-14T21:55:15.4929912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward
2025-08-14T21:55:15.4930297Z hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:55:15.4930686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward
2025-08-14T21:55:15.4931075Z attn_output, attn_weights = attention_interface(
2025-08-14T21:55:15.4931506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:55:15.4931945Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:55:15.4932109Z
2025-08-14T21:55:15.4932187Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4932393Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.4932628Z cudagraph partition due to non gpu ops.
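The frames above all bottom out in CPU-side ops: torch.nn.functional.scaled_dot_product_attention, the transpose(1, 2).contiguous() that follows it, and the decoder's activation function. A minimal sketch of that code shape, assuming only a recent PyTorch build with torch.compile (the TinyDecoderBlock module below is a hypothetical stand-in, not the benchmark harness or the Hugging Face OPT model):

# Hypothetical sketch of the code shape these frames point at; names are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDecoderBlock(nn.Module):
    # Mirrors the partition points reported above: SDPA, then transpose().contiguous(),
    # then an eager activation function.
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.act = nn.ReLU()
        self.heads = heads

    def forward(self, x):
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, s, self.heads, d // self.heads)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v)          # sdpa_attention.py:81 above
        attn = attn.transpose(1, 2).contiguous().view(b, s, d)  # sdpa_attention.py:91 above
        return self.act(self.proj(attn))                        # activation_fn frame above

compiled = torch.compile(TinyDecoderBlock(), mode="reduce-overhead")
_ = compiled(torch.randn(2, 16, 64))

On this CPU-only configuration every op is a "non gpu op" from the CUDA-graph partitioner's point of view, which is what the repeated partition messages are reporting; the sketch is meant to show the code shape, not to reproduce the exact log lines.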
Found from :
2025-08-14T21:55:15.5102888Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:15.5103215Z return mod(**inputs)
2025-08-14T21:55:15.5103537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.5103891Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.5104270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-08-14T21:55:15.5104640Z outputs = self.model.decoder(
2025-08-14T21:55:15.5104991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.5105350Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.5105732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-08-14T21:55:15.5106103Z layer_outputs = decoder_layer(
2025-08-14T21:55:15.5106456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:15.5106821Z return super().__call__(*args, **kwargs)
2025-08-14T21:55:15.5107193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward
2025-08-14T21:55:15.5107594Z hidden_states = self.activation_fn(hidden_states)
2025-08-14T21:55:15.5107756Z
2025-08-14T21:55:15.5107839Z cudagraph partition due to non gpu ops
2025-08-14T21:55:15.5108075Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:15.5108433Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:15.5108762Z return mod(**inputs)
2025-08-14T21:55:15.5109111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.5109464Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.5109839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 841, in forward
2025-08-14T21:55:15.5110257Z logits = self.lm_head(outputs[0]).contiguous()
2025-08-14T21:55:15.5110407Z
2025-08-14T21:55:15.5110516Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:15.5110867Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:15.5111211Z return mod(**inputs)
2025-08-14T21:55:15.5111526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-08-14T21:55:15.5111866Z output = func(self, *args, **kwargs)
2025-08-14T21:55:15.5112221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 847, in forward
2025-08-14T21:55:15.5112585Z loss = self.loss_function(
2025-08-14T21:55:15.5112941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-08-14T21:55:15.5113397Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-08-14T21:55:15.5113866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-08-14T21:55:15.5114350Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T21:55:15.5114597Z
2025-08-14T21:55:26.7078766Z Compilation time (from dynamo_timed): 18.615797808
2025-08-14T21:55:26.7604722Z pass
2025-08-14T21:55:26.7610388Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:26.7611286Z TIMING: _recursive_pre_grad_passes:0.04167 _recursive_joint_graph_passes:0.66249 _recursive_post_grad_passes:0.09279 async_compile.wait:0.87662 code_gen:10.07094 inductor_compile:11.83524 backend_compile:16.23117 gc:0.00018 entire_frame_compile:18.6158 total_wall_time:18.6158
2025-08-14T21:55:26.7612232Z STATS: call_* op count: 415 | FakeTensorMode.__torch_dispatch__:23751 | FakeTensor.__torch_dispatch__:3685 | ProxyTorchDispatchMode.__torch_dispatch__:5527
2025-08-14T21:55:26.7612735Z Dynamo produced 1 graphs covering 415 ops with 0 graph breaks (0 unique)
2025-08-14T21:55:32.4943029Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
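The "Compilation time (from dynamo_timed)" figure and the TIMING breakdown above come from PyTorch's internal instrumentation. A rough external approximation, assuming nothing beyond torch.compile itself, is to time the first compiled call (which pays for Dynamo tracing plus Inductor code generation) against a later call that reuses the cached artifact:

# Rough external approximation only; not the dynamo_timed instrumentation used above.
import time
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
compiled = torch.compile(model)
x = torch.randn(8, 64)

t0 = time.perf_counter()
compiled(x)                      # first call: Dynamo tracing + Inductor codegen + run
first = time.perf_counter() - t0

t0 = time.perf_counter()
compiled(x)                      # later calls reuse the compiled code
steady = time.perf_counter() - t0

print(f"first call {first:.3f}s vs steady state {steady:.6f}s")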
2025-08-14T21:55:32.4944119Z from pkg_resources import resource_filename
2025-08-14T21:55:33.2188254Z
2025-08-14T21:55:34.6800771Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:55:34.6801273Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:55:34.6812034Z cpu eval PLBartForCausalLM
2025-08-14T21:55:35.4012685Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:35.7043183Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:36.0129414Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:41.9374324Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9380359Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9384949Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9389644Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9394572Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9398773Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9403702Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9407701Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9412672Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9414690Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9414929Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9415389Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9415606Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9415876Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:41.9416309Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9416786Z return mod(**inputs)
2025-08-14T21:55:41.9417285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-08-14T21:55:41.9417731Z outputs = self.model.decoder(
2025-08-14T21:55:41.9418180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-08-14T21:55:41.9418605Z layer_outputs = decoder_layer(
2025-08-14T21:55:41.9418987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:41.9419389Z return super().__call__(*args, **kwargs)
2025-08-14T21:55:41.9419983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-08-14T21:55:41.9420434Z hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:55:41.9420901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-08-14T21:55:41.9421377Z attn_output, attn_weights = attention_interface(
2025-08-14T21:55:41.9421862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:55:41.9422369Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:55:41.9422580Z
2025-08-14T21:55:41.9422697Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:41.9423089Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9423437Z return mod(**inputs)
2025-08-14T21:55:41.9423842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-08-14T21:55:41.9424267Z outputs = self.model.decoder(
2025-08-14T21:55:41.9424683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-08-14T21:55:41.9425100Z layer_outputs = decoder_layer(
2025-08-14T21:55:41.9425476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:41.9425862Z return super().__call__(*args, **kwargs)
2025-08-14T21:55:41.9426277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-08-14T21:55:41.9426724Z hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:55:41.9427169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-08-14T21:55:41.9427615Z attn_output, attn_weights = attention_interface(
2025-08-14T21:55:41.9428086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:55:41.9428576Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:55:41.9428760Z
2025-08-14T21:55:41.9428847Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9429076Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9429325Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:41.9429751Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9430103Z return mod(**inputs)
2025-08-14T21:55:41.9430498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-08-14T21:55:41.9430920Z outputs = self.model.decoder(
2025-08-14T21:55:41.9431313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-08-14T21:55:41.9431706Z layer_outputs = decoder_layer(
2025-08-14T21:55:41.9432050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:41.9432467Z return super().__call__(*args, **kwargs)
2025-08-14T21:55:41.9432870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward
2025-08-14T21:55:41.9433304Z hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:55:41.9433710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:55:41.9434079Z return self.act(input)
2025-08-14T21:55:41.9434205Z
2025-08-14T21:55:41.9434294Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9434501Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9434712Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9434918Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9435116Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9435321Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9435527Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9435727Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9435962Z cudagraph partition due to non gpu ops.
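The repeated "WARNING:common:Trying to call the empty_gpu_cache for device: cpu" lines earlier in this output come from the benchmark harness asking to clear an accelerator cache on a CPU run. A hypothetical sketch of that kind of device guard (not the harness's actual helper) might look like:

# Hypothetical device guard mirroring the warning text above; not the benchmark's real code.
import logging
import torch

log = logging.getLogger("common")

def empty_gpu_cache(device: str) -> None:
    if device == "cuda":
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and hasattr(torch.xpu, "empty_cache"):
        torch.xpu.empty_cache()
    else:
        log.warning(
            "Trying to call the empty_gpu_cache for device: %s, which is not in list [cuda, xpu]",
            device,
        )

empty_gpu_cache("cpu")  # on this job the device is cpu, so only the warning fires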
Found from :
2025-08-14T21:55:41.9436323Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9436650Z     return mod(**inputs)
2025-08-14T21:55:41.9437022Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-08-14T21:55:41.9437430Z     outputs = self.model.decoder(
2025-08-14T21:55:41.9437839Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-08-14T21:55:41.9438251Z     layer_outputs = decoder_layer(
2025-08-14T21:55:41.9438618Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:41.9439001Z     return super().__call__(*args, **kwargs)
2025-08-14T21:55:41.9439424Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-08-14T21:55:41.9439859Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:55:41.9440304Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-08-14T21:55:41.9440750Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:55:41.9441219Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:55:41.9441725Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:55:41.9442153Z 
2025-08-14T21:55:41.9442269Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:41.9442659Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9443002Z     return mod(**inputs)
2025-08-14T21:55:41.9443411Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-08-14T21:55:41.9443846Z     outputs = self.model.decoder(
2025-08-14T21:55:41.9444325Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-08-14T21:55:41.9444746Z     layer_outputs = decoder_layer(
2025-08-14T21:55:41.9445122Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:41.9445499Z     return super().__call__(*args, **kwargs)
2025-08-14T21:55:41.9445956Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-08-14T21:55:41.9446415Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:55:41.9446851Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-08-14T21:55:41.9447374Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:55:41.9447845Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:55:41.9448332Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:55:41.9448505Z 
2025-08-14T21:55:41.9448591Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9448815Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9449064Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:55:41.9528343Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9528672Z     return mod(**inputs)
2025-08-14T21:55:41.9529061Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-08-14T21:55:41.9529468Z     outputs = self.model.decoder(
2025-08-14T21:55:41.9529864Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-08-14T21:55:41.9530258Z     layer_outputs = decoder_layer(
2025-08-14T21:55:41.9530611Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:55:41.9530982Z     return super().__call__(*args, **kwargs)
2025-08-14T21:55:41.9531375Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward
2025-08-14T21:55:41.9531828Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:55:41.9532222Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:55:41.9532566Z     return self.act(input)
2025-08-14T21:55:41.9532678Z 
2025-08-14T21:55:41.9532755Z cudagraph partition due to non gpu ops
2025-08-14T21:55:41.9533022Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:41.9533387Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9533703Z     return mod(**inputs)
2025-08-14T21:55:41.9534086Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1694, in forward
2025-08-14T21:55:41.9534519Z     logits = self.lm_head(outputs[0])
2025-08-14T21:55:41.9534649Z 
2025-08-14T21:55:41.9534764Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:55:41.9535122Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:55:41.9535491Z     return mod(**inputs)
2025-08-14T21:55:41.9535863Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1700, in forward
2025-08-14T21:55:41.9536333Z     loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:55:41.9536529Z 
2025-08-14T21:55:50.6700972Z Compilation time (from dynamo_timed): 13.087726152
2025-08-14T21:55:50.7005669Z pass
2025-08-14T21:55:50.7006121Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:55:50.7007060Z TIMING: _recursive_pre_grad_passes:0.02148 _recursive_joint_graph_passes:0.25339 _recursive_post_grad_passes:0.05244 async_compile.wait:0.84029 code_gen:8.11231 inductor_compile:9.3088 backend_compile:11.77273 gc:0.00165 entire_frame_compile:13.08773 total_wall_time:13.08773
2025-08-14T21:55:50.7008083Z STATS: call_* op count: 198 | FakeTensorMode.__torch_dispatch__:13155 | FakeTensor.__torch_dispatch__:2127 | ProxyTorchDispatchMode.__torch_dispatch__:2975
2025-08-14T21:55:50.7008657Z Dynamo produced 1 graphs covering 198 ops with 0 graph breaks (0 unique)
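The "Compilation time (from dynamo_timed)" line and the TIMING breakdown above apportion the roughly 13.1 s frame compile across the Dynamo/Inductor phases (pre/joint/post-grad passes, code generation, backend compile). As a hedged sketch, the snippet below shows one way such a per-phase summary can be pulled out of Dynamo after a compiled run; it assumes torch._dynamo.utils.compile_times(repr="str", aggregate=True) is available with that signature, and it is not necessarily how this benchmark harness produces its TIMING line.

```python
# Hedged sketch: after the first call compiles `step`, ask Dynamo for its accumulated
# per-phase timings. Assumes torch._dynamo.utils.compile_times exists with this
# signature; the exact phases reported may differ from the TIMING line above.
import torch
import torch._dynamo.utils as dynamo_utils

@torch.compile
def step(x: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.gelu(x @ x.T)

step(torch.randn(32, 32))  # first call triggers dynamo + inductor compilation
print(dynamo_utils.compile_times(repr="str", aggregate=True))
```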
2025-08-14T21:55:56.3477003Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:55:56.3478037Z   from pkg_resources import resource_filename
2025-08-14T21:55:57.0946461Z 
2025-08-14T21:55:59.6125648Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:55:59.6126726Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:55:59.6133602Z cpu eval PLBartForConditionalGeneration
2025-08-14T21:56:00.8514235Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:01.4870872Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:02.1693267Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:14.1540562Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:56:14.1544472Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:56:14.1550855Z     return mod(**inputs)
2025-08-14T21:56:14.1556254Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1357, in forward
2025-08-14T21:56:14.1561615Z     decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
2025-08-14T21:56:14.1562307Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1084, in shift_tokens_right
2025-08-14T21:56:14.1562898Z     index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
2025-08-14T21:56:14.1563165Z 
2025-08-14T21:56:14.1563271Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1563518Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1563750Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1563966Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1564191Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1564745Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1565021Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1565258Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1565495Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1565804Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1566026Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1566251Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1566475Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1566732Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:56:14.1567261Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:56:14.1567638Z     return mod(**inputs)
2025-08-14T21:56:14.1568057Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-08-14T21:56:14.1568509Z     outputs = self.model(
2025-08-14T21:56:14.1568934Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward
2025-08-14T21:56:14.1569375Z     encoder_outputs = self.encoder(
2025-08-14T21:56:14.1569802Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward
2025-08-14T21:56:14.1570242Z     layer_outputs = encoder_layer(
2025-08-14T21:56:14.1570632Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:56:14.1571030Z     return super().__call__(*args, **kwargs)
2025-08-14T21:56:14.1571487Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward
2025-08-14T21:56:14.1571946Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:56:14.1572399Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-08-14T21:56:14.1572861Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:56:14.1573358Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:56:14.1573895Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:56:14.1574100Z 
2025-08-14T21:56:14.1574227Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:56:14.1574621Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:56:14.1574988Z     return mod(**inputs)
2025-08-14T21:56:14.1575397Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-08-14T21:56:14.1575816Z     outputs = self.model(
2025-08-14T21:56:14.1576222Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward
2025-08-14T21:56:14.1576653Z     encoder_outputs = self.encoder(
2025-08-14T21:56:14.1577075Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward
2025-08-14T21:56:14.1577493Z     layer_outputs = encoder_layer(
2025-08-14T21:56:14.1577875Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:56:14.1578279Z     return super().__call__(*args, **kwargs)
2025-08-14T21:56:14.1578709Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward
2025-08-14T21:56:14.1579151Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:56:14.1579772Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-08-14T21:56:14.1580237Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:56:14.1580763Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:56:14.1581275Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:56:14.1581468Z 
2025-08-14T21:56:14.1581591Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1581826Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1582080Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:56:14.1582479Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:56:14.1582848Z     return mod(**inputs)
2025-08-14T21:56:14.1583303Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-08-14T21:56:14.1583730Z     outputs = self.model(
2025-08-14T21:56:14.1584135Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward
2025-08-14T21:56:14.1584570Z     encoder_outputs = self.encoder(
2025-08-14T21:56:14.1585003Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward
2025-08-14T21:56:14.1585605Z     layer_outputs = encoder_layer(
2025-08-14T21:56:14.1585996Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:56:14.1586404Z     return super().__call__(*args, **kwargs)
2025-08-14T21:56:14.1586831Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward
2025-08-14T21:56:14.1587327Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:56:14.1587745Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:56:14.1588101Z     return self.act(input)
2025-08-14T21:56:14.1588228Z 
2025-08-14T21:56:14.1588314Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1588556Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1588789Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1589011Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1589238Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1589467Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1589688Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1589922Z cudagraph partition due to non gpu ops
2025-08-14T21:56:14.1590174Z cudagraph partition due to non gpu ops.
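The repeated "WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]" lines earlier in this run come from the harness asking for a device cache flush between measurements on a device that has none. Below is a small illustrative sketch of the kind of guard that warning points at; the helper name and structure are assumptions for illustration, not the harness's actual code.

```python
# Illustrative sketch (hypothetical helper, not benchmarks/dynamo/common.py): only call a
# backend's cache-flush hook when that backend actually exposes one; on cpu there is
# nothing to flush, which is the situation the warnings in this log report.
import warnings
import torch

def empty_gpu_cache(device: str) -> None:
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    else:
        warnings.warn(
            f"Trying to call the empty_gpu_cache for device: {device}, "
            f"which is not in list [cuda, xpu]"
        )

empty_gpu_cache("cpu")  # emits the same style of warning as seen above
```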
Found from : 2025-08-14T21:56:14.1686070Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1686407Z return mod(**inputs) 2025-08-14T21:56:14.1686781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1687170Z outputs = self.model( 2025-08-14T21:56:14.1687580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:56:14.1687982Z encoder_outputs = self.encoder( 2025-08-14T21:56:14.1688365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:56:14.1688745Z layer_outputs = encoder_layer( 2025-08-14T21:56:14.1689089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1689452Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1689838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward 2025-08-14T21:56:14.1690241Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:56:14.1690645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1691058Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1691492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1691944Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1692109Z 2025-08-14T21:56:14.1692195Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1692403Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1692626Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1692981Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1693299Z return mod(**inputs) 2025-08-14T21:56:14.1693658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1694045Z outputs = self.model( 2025-08-14T21:56:14.1694414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward 2025-08-14T21:56:14.1694807Z encoder_outputs = self.encoder( 2025-08-14T21:56:14.1695182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward 2025-08-14T21:56:14.1695572Z layer_outputs = encoder_layer( 2025-08-14T21:56:14.1695914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1696264Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1696658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 507, in forward 2025-08-14T21:56:14.1697093Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:14.1697485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:14.1697825Z return self.act(input) 2025-08-14T21:56:14.1697942Z 2025-08-14T21:56:14.1698021Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1698236Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1698438Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1698645Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1698880Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1699092Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1699298Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1699616Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1699902Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1700285Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1700635Z return mod(**inputs) 2025-08-14T21:56:14.1701043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1701475Z outputs = self.model( 2025-08-14T21:56:14.1701842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1702230Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1702623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1703022Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1703364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1703721Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1704101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1704513Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1704922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1705326Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1705754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1706228Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1706415Z 2025-08-14T21:56:14.1706518Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1706869Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1707184Z return mod(**inputs) 2025-08-14T21:56:14.1707543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1707922Z outputs = self.model( 2025-08-14T21:56:14.1708274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1708667Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1709044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1709444Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1709778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1710129Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1710516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1710928Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1711325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1711738Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1712170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1712606Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1712773Z 2025-08-14T21:56:14.1712886Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1713101Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1713307Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1713507Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1713733Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1713935Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1714127Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1714330Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1714566Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1714959Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1715280Z return mod(**inputs) 2025-08-14T21:56:14.1715639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1716023Z outputs = self.model( 2025-08-14T21:56:14.1716377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1716763Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1717145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1717530Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1717869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1718218Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1718611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1719027Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1719444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1719857Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1720299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1720770Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1720970Z 2025-08-14T21:56:14.1721072Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1721424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1721745Z return mod(**inputs) 2025-08-14T21:56:14.1722122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1722517Z outputs = self.model( 2025-08-14T21:56:14.1722888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1723349Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1723741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1724137Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1724489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1724845Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1725243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1725673Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1726094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1726513Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1726980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1727435Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1727617Z 2025-08-14T21:56:14.1727697Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1727908Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1728146Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1728504Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1728834Z return mod(**inputs) 2025-08-14T21:56:14.1729290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1729685Z outputs = self.model( 2025-08-14T21:56:14.1730052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1730454Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1730846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1731242Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1731593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1731958Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1732360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:56:14.1732795Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:14.1733182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:14.1733529Z return self.act(input) 2025-08-14T21:56:14.1733634Z 2025-08-14T21:56:14.1733721Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1733922Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1734126Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1734332Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1734536Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1734744Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1734949Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1735147Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1735381Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1735748Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1736078Z return mod(**inputs) 2025-08-14T21:56:14.1736446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1736835Z outputs = self.model( 2025-08-14T21:56:14.1737211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1737601Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1737992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1738389Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1738741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1739098Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1739582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1740036Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1740485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1740968Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1741447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1742191Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1742379Z 2025-08-14T21:56:14.1742488Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1742856Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1743190Z return mod(**inputs) 2025-08-14T21:56:14.1743647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1744036Z outputs = self.model( 2025-08-14T21:56:14.1744408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1744805Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1745187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1745581Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1745945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1746333Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1746747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1747202Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1747616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1748033Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1748472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1748929Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1749092Z 2025-08-14T21:56:14.1749183Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1749387Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1749594Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1749801Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1750005Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1750204Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1750414Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1750622Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1750848Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1751212Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1751543Z return mod(**inputs) 2025-08-14T21:56:14.1751908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1752300Z outputs = self.model( 2025-08-14T21:56:14.1752673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1753070Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1753451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1753849Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1754206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1754560Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1754995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1755436Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1755865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1756307Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1756758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1757247Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1757451Z 2025-08-14T21:56:14.1757579Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1757938Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1758268Z return mod(**inputs) 2025-08-14T21:56:14.1758645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1759033Z outputs = self.model( 2025-08-14T21:56:14.1759400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1759796Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1760182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1760571Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1760927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1761296Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1761694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1762114Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1762539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1762957Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1763396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1763910Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1764079Z 2025-08-14T21:56:14.1764160Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1764375Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1764612Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1764977Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1765306Z return mod(**inputs) 2025-08-14T21:56:14.1765669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1766067Z outputs = self.model( 2025-08-14T21:56:14.1766439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1766841Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1767221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1767616Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1767967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1768327Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1768728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:56:14.1769201Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:14.1769605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:14.1769931Z return self.act(input) 2025-08-14T21:56:14.1770046Z 2025-08-14T21:56:14.1770124Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1770350Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1770547Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1770751Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1770955Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1771160Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1771387Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1771607Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1771840Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1772201Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1772529Z return mod(**inputs) 2025-08-14T21:56:14.1772903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1773296Z outputs = self.model( 2025-08-14T21:56:14.1773657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1774055Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1774449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1774833Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1775187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1775551Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1775945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1776360Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1776780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1777198Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1777633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1778113Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1778302Z 2025-08-14T21:56:14.1778407Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1778768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1779089Z return mod(**inputs) 2025-08-14T21:56:14.1779562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1779989Z outputs = self.model( 2025-08-14T21:56:14.1780381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1780791Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1781204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1781590Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1781927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1782288Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1782681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1783094Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1783519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1783925Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1784359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1784823Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1784981Z 2025-08-14T21:56:14.1785060Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1785266Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1785468Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1785709Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1785915Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1786121Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1786319Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1786525Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1786765Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1787123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1787436Z return mod(**inputs) 2025-08-14T21:56:14.1787801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1788182Z outputs = self.model( 2025-08-14T21:56:14.1788537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1788926Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1789313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1789702Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1790041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1790399Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1790789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1791198Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1791613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1792021Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1792455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1792917Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1793104Z 2025-08-14T21:56:14.1793207Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1793565Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1793885Z return mod(**inputs) 2025-08-14T21:56:14.1794240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1794623Z outputs = self.model( 2025-08-14T21:56:14.1794987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1795374Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1795760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1796149Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1796490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1796839Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1797249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1797666Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1798079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1798506Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1798950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1799442Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1799603Z 2025-08-14T21:56:14.1799684Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1799897Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1800134Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1800498Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1800826Z return mod(**inputs) 2025-08-14T21:56:14.1801196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1801588Z outputs = self.model( 2025-08-14T21:56:14.1801951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1802348Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1802740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1803143Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1803492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1803859Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1804260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:56:14.1804693Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:14.1805084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:14.1805426Z return self.act(input) 2025-08-14T21:56:14.1805536Z 2025-08-14T21:56:14.1805624Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1805829Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1806040Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1806255Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1806457Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1806667Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1806876Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1807076Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1807314Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1807676Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1808018Z return mod(**inputs) 2025-08-14T21:56:14.1808383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1808798Z outputs = self.model( 2025-08-14T21:56:14.1809188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1809607Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1810000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1810398Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1810751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1811161Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1811559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1811995Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1812408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1812815Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1813277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1813770Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1813951Z 2025-08-14T21:56:14.1814067Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1814427Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1814754Z return mod(**inputs) 2025-08-14T21:56:14.1815122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1815510Z outputs = self.model( 2025-08-14T21:56:14.1815875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1816267Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1816652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1817041Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1817392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1817768Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1818190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1818640Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1819086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1819619Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1820098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1820594Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1820772Z 2025-08-14T21:56:14.1820869Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1821105Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1821320Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1821541Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1821761Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1821971Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1822188Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1822405Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1822648Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1823032Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1823381Z return mod(**inputs) 2025-08-14T21:56:14.1823760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1824174Z outputs = self.model( 2025-08-14T21:56:14.1824561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1824980Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1825408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1825827Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1826196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1826611Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1827022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1827478Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1827947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1828402Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1828871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1829381Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1829574Z 2025-08-14T21:56:14.1829693Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1830070Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1830420Z return mod(**inputs) 2025-08-14T21:56:14.1830812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1831234Z outputs = self.model( 2025-08-14T21:56:14.1831601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1832002Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1832389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1832779Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1833129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1833494Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1833891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-08-14T21:56:14.1834309Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:56:14.1834736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1835154Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1835598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1836047Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1836215Z 2025-08-14T21:56:14.1836297Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1836512Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1836746Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1837110Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1837444Z return mod(**inputs) 2025-08-14T21:56:14.1837815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1838202Z outputs = self.model( 2025-08-14T21:56:14.1838572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1838970Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1839350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1839746Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1840125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1840490Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1840880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward 2025-08-14T21:56:14.1841341Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:14.1841739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:14.1842286Z return self.act(input) 2025-08-14T21:56:14.1842469Z 2025-08-14T21:56:14.1842578Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1842793Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1843006Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1843209Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1843421Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1843635Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1843838Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1844048Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1844287Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1844648Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1844987Z return mod(**inputs) 2025-08-14T21:56:14.1845358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1845760Z outputs = self.model( 2025-08-14T21:56:14.1846117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1846506Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1846883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1847262Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1847604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1847961Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1848348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1848751Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1849195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1849613Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1850055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:14.1850511Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:14.1850697Z 2025-08-14T21:56:14.1850799Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:14.1851151Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:14.1851462Z return mod(**inputs) 2025-08-14T21:56:14.1851818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-08-14T21:56:14.1852200Z outputs = self.model( 2025-08-14T21:56:14.1852564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-08-14T21:56:14.1852945Z decoder_outputs = self.decoder( 2025-08-14T21:56:14.1853327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-08-14T21:56:14.1853711Z layer_outputs = decoder_layer( 2025-08-14T21:56:14.1854087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:14.1854434Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:14.1854825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-08-14T21:56:14.1855261Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:14.1855668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-08-14T21:56:14.1856072Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:14.1856546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:14.1856997Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:14.1857159Z 2025-08-14T21:56:14.1857239Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1857456Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1857668Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1857869Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1858078Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1858288Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1858496Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1858693Z cudagraph partition due to non gpu ops 2025-08-14T21:56:14.1858927Z cudagraph partition due to non gpu ops. 
2025-08-14T21:56:14.1859291Z Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward
    hidden_states, cross_attn_weights = self.encoder_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops. Found from :
  [same call stack through self.encoder_attn as above, then]
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops  [x2]
cudagraph partition due to non gpu ops. Found from :
  [same call stack through decoder_layer as above, then]
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 792, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops  [x8]
cudagraph partition due to non gpu ops. Found from :
  [same call stack through decoder_layer as above, then]
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(

cudagraph partition due to non gpu ops. Found from :
  [same call stack through self.self_attn as above, then]
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops  [x8]
cudagraph partition due to non gpu ops. Found from :
  [same self.encoder_attn -> scaled_dot_product_attention stack as the first traceback above]

cudagraph partition due to non gpu ops. Found from :
  [same self.encoder_attn -> transpose(1, 2).contiguous() stack as above]

cudagraph partition due to non gpu ops  [x2]
cudagraph partition due to non gpu ops. Found from :
  [same self.activation_fn(self.fc1(hidden_states)) stack as above]

cudagraph partition due to non gpu ops  [x1]
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1377, in forward
    lm_logits = self.lm_head(outputs[0])
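All of the attention tracebacks above bottom out in the same two lines of transformers' SDPA integration (sdpa_attention.py lines 81 and 91). As a rough, self-contained illustration of that call pattern, and not the benchmark harness itself, the sketch below reproduces it with invented tensor shapes; on a CPU-only job like this one these are the call sites the partition notices point at.

    # Minimal sketch of the pattern named in the tracebacks above
    # (sdpa_attention.py lines 81 and 91); shapes are invented for the example.
    import torch
    import torch.nn.functional as F

    batch, heads, seq, head_dim = 2, 12, 128, 64
    query = torch.randn(batch, heads, seq, head_dim)   # CPU tensors, as in this job
    key = torch.randn(batch, heads, seq, head_dim)
    value = torch.randn(batch, heads, seq, head_dim)

    # Line 81 in the traceback: fused scaled dot-product attention.
    attn_output = F.scaled_dot_product_attention(query, key, value, attn_mask=None, is_causal=False)

    # Line 91 in the traceback: put heads next to the sequence dim and make the
    # result contiguous before the output projection.
    attn_output = attn_output.transpose(1, 2).contiguous()
    print(attn_output.shape)  # torch.Size([2, 128, 12, 64])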
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1383, in forward
    masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))

2025-08-14T21:56:24.8062254Z Compilation time (from dynamo_timed): 20.743972703
2025-08-14T21:56:24.8304714Z pass
2025-08-14T21:56:24.8305339Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:24.8306302Z TIMING: _recursive_pre_grad_passes:0.05197 _recursive_joint_graph_passes:0.46602 _recursive_post_grad_passes:0.10068 async_compile.wait:0.81153 code_gen:9.6114 inductor_compile:12.05884 backend_compile:17.68052 gc:0.00238 entire_frame_compile:20.74397 total_wall_time:20.74397
2025-08-14T21:56:24.8307244Z STATS: call_* op count: 517 | FakeTensorMode.__torch_dispatch__:32805 | FakeTensor.__torch_dispatch__:5139 | ProxyTorchDispatchMode.__torch_dispatch__:7226
2025-08-14T21:56:24.8307755Z Dynamo produced 1 graphs covering 517 ops with 0 graph breaks (0 unique)
2025-08-14T21:56:30.8715028Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:56:30.8716143Z   from pkg_resources import resource_filename
2025-08-14T21:56:35.2688673Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:56:35.2689023Z loading model: 0it [00:03, ?it/s]
2025-08-14T21:56:35.2707016Z cpu eval PegasusForCausalLM
2025-08-14T21:56:35.7243728Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:35.9127639Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:36.0651049Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:46.2310990Z cudagraph partition due to non gpu ops  [x19]
cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward
    outputs = self.model.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward
    hidden_states, self_attn_weights = self.self_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
    attn_output, attn_weights = attention_interface(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
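The "Compilation time (from dynamo_timed): 20.74..." line and the TIMING breakdown above come from the harness's own instrumentation. As a loose approximation of what that number covers, one can simply compare the first (compiling) call of a torch.compile'd module with a later, already-compiled call; the toy module below is invented for this sketch and is not what the benchmark runs.

    # Wall-clock sketch of first-call compile cost vs. steady state; this is not the
    # dynamo_timed instrumentation used by the benchmark harness, just an illustration.
    import time
    import torch

    model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU(), torch.nn.Linear(64, 64))
    compiled = torch.compile(model)  # default Inductor backend; CPU in this job
    x = torch.randn(8, 64)

    t0 = time.perf_counter()
    compiled(x)                      # triggers Dynamo tracing + Inductor codegen
    t1 = time.perf_counter()
    compiled(x)                      # reuses the compiled artifact
    t2 = time.perf_counter()

    print(f"first call (includes compile): {t1 - t0:.3f}s")
    print(f"steady state:                  {t2 - t1:.3f}s")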
cudagraph partition due to non gpu ops. Found from :
  [same Pegasus call stack through self.self_attn as above, then]
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
    attn_output = attn_output.transpose(1, 2).contiguous()

cudagraph partition due to non gpu ops  [x2]
cudagraph partition due to non gpu ops. Found from :
  [same Pegasus call stack through decoder_layer as above, then]
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

cudagraph partition due to non gpu ops  [x8]
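The repeated "Trying to call the empty_gpu_cache for device: cpu" warnings earlier in this job come from asking to clear an accelerator cache on a CPU run. A minimal, hypothetical guard is sketched below; the helper name and structure are assumptions made for illustration, not the benchmarks' actual empty_gpu_cache in the common module.

    # Hypothetical device-guarded cache clear; only an illustration of how the
    # "not in list [cuda, xpu]" warning could be avoided on cpu runs.
    import torch

    def maybe_empty_accelerator_cache(device: str) -> None:
        # Only CUDA (and, where available, XPU) expose a cache to release;
        # on cpu this is a no-op instead of a warning.
        if device == "cuda" and torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()

    maybe_empty_accelerator_cache("cpu")  # silently does nothing on this CPU job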
Found from : 2025-08-14T21:56:46.2337546Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2337905Z return mod(**inputs) 2025-08-14T21:56:46.2338323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2338778Z outputs = self.model.decoder( 2025-08-14T21:56:46.2339205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2339905Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2340303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2340705Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2341168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2341634Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2342485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2342944Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2343437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:46.2343999Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:46.2344205Z 2025-08-14T21:56:46.2344323Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:56:46.2344722Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2345109Z return mod(**inputs) 2025-08-14T21:56:46.2345626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2346069Z outputs = self.model.decoder( 2025-08-14T21:56:46.2346517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2346966Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2347349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2347739Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2348193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2348664Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2349118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2349579Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2350064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:46.2350568Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:46.2350746Z 2025-08-14T21:56:46.2350834Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2351063Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2351321Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2351709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2352067Z return mod(**inputs) 2025-08-14T21:56:46.2352481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2352922Z outputs = self.model.decoder( 2025-08-14T21:56:46.2353347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2353780Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2354164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2354564Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2354992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:56:46.2355484Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:46.2355912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:46.2356281Z return self.act(input) 2025-08-14T21:56:46.2356409Z 2025-08-14T21:56:46.2356495Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2356732Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2356960Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2357175Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2357400Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2357623Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2357862Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2358091Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2358349Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2358735Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2359121Z return mod(**inputs) 2025-08-14T21:56:46.2359538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2359984Z outputs = self.model.decoder( 2025-08-14T21:56:46.2360414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2361749Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2362123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2362504Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2362937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2363390Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2363847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2364289Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2364763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:46.2365279Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:46.2365474Z 2025-08-14T21:56:46.2365592Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:56:46.2365980Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2366326Z return mod(**inputs) 2025-08-14T21:56:46.2366725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2367147Z outputs = self.model.decoder( 2025-08-14T21:56:46.2367564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2367993Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2368362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2368739Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2369165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2369614Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2370067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2370505Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2370973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:46.2371462Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:46.2371635Z 2025-08-14T21:56:46.2371719Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2371948Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2372211Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2372608Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2372968Z return mod(**inputs) 2025-08-14T21:56:46.2373380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2373845Z outputs = self.model.decoder( 2025-08-14T21:56:46.2374271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2374707Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2375115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2375512Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2375936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:56:46.2376463Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:46.2376895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:46.2377265Z return self.act(input) 2025-08-14T21:56:46.2377384Z 2025-08-14T21:56:46.2377470Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2377700Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2377926Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2378143Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2378368Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2378591Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2378805Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2379024Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2379279Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2379773Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2380148Z return mod(**inputs) 2025-08-14T21:56:46.2380565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2381006Z outputs = self.model.decoder( 2025-08-14T21:56:46.2381429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2381870Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2382258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2382661Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2383091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2383568Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2384038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2384492Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2384973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:46.2385506Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:46.2385709Z 2025-08-14T21:56:46.2385827Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:56:46.2386215Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2386572Z return mod(**inputs) 2025-08-14T21:56:46.2386994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2387443Z outputs = self.model.decoder( 2025-08-14T21:56:46.2387868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2388316Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2388685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2389110Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2389544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2390012Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2390503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2390970Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2391459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:46.2391999Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:46.2392174Z 2025-08-14T21:56:46.2392269Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2392605Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2392857Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2393244Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2393582Z return mod(**inputs) 2025-08-14T21:56:46.2393987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2394414Z outputs = self.model.decoder( 2025-08-14T21:56:46.2394834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2395252Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2395657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2396040Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2396456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:56:46.2396929Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:46.2397342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:46.2397699Z return self.act(input) 2025-08-14T21:56:46.2397821Z 2025-08-14T21:56:46.2397905Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2398128Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2398350Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2398564Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2398792Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2399022Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2399251Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2399468Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2399724Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2400123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2400473Z return mod(**inputs) 2025-08-14T21:56:46.2400888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2401333Z outputs = self.model.decoder( 2025-08-14T21:56:46.2401755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2402191Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2402570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2402975Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2403403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2403874Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2404396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2404858Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2405331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:46.2405885Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:46.2406084Z 2025-08-14T21:56:46.2406206Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:56:46.2406594Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2406995Z return mod(**inputs) 2025-08-14T21:56:46.2407412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2407852Z outputs = self.model.decoder( 2025-08-14T21:56:46.2408279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2408715Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2409099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2409503Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2409954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2410421Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2410885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2411337Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2411827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:46.2412335Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:46.2412512Z 2025-08-14T21:56:46.2412609Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2412837Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2413099Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2413495Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2413844Z return mod(**inputs) 2025-08-14T21:56:46.2414263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2414712Z outputs = self.model.decoder( 2025-08-14T21:56:46.2415145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2415572Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2415974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2416389Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2416841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:56:46.2417329Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:46.2417759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:46.2418131Z return self.act(input) 2025-08-14T21:56:46.2418253Z 2025-08-14T21:56:46.2418340Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2418571Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2418797Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2419019Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2419234Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2419599Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2419844Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2420063Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2420322Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2420746Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2421106Z return mod(**inputs) 2025-08-14T21:56:46.2421520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2421957Z outputs = self.model.decoder( 2025-08-14T21:56:46.2422458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2422884Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2423266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2423665Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2424099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2424568Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2425029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2425483Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2425960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:46.2426487Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:46.2426697Z 2025-08-14T21:56:46.2426810Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:56:46.2427207Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2427555Z return mod(**inputs) 2025-08-14T21:56:46.2427992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2428432Z outputs = self.model.decoder( 2025-08-14T21:56:46.2428867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2429308Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2429691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2430089Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2430519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2430979Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2431442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2431901Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2432374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:46.2432878Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:46.2433054Z 2025-08-14T21:56:46.2433152Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2433373Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2433635Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2434036Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2434391Z return mod(**inputs) 2025-08-14T21:56:46.2434829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2435264Z outputs = self.model.decoder( 2025-08-14T21:56:46.2435698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2436151Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2436532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2436925Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2437378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:56:46.2437868Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:46.2438289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:46.2438665Z return self.act(input) 2025-08-14T21:56:46.2438784Z 2025-08-14T21:56:46.2438880Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2439104Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2439326Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2439549Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2439767Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2439990Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2440210Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2440424Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2440676Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2441073Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2441418Z return mod(**inputs) 2025-08-14T21:56:46.2442116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2442572Z outputs = self.model.decoder( 2025-08-14T21:56:46.2443009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2443439Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2443828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2444231Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2444669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2445124Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2445593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2446053Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2446531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:46.2447065Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:46.2447276Z 2025-08-14T21:56:46.2447392Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:56:46.2447789Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2448136Z return mod(**inputs) 2025-08-14T21:56:46.2448559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2449006Z outputs = self.model.decoder( 2025-08-14T21:56:46.2449447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2449870Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2450361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2450757Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2451184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2451682Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2452142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2452605Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2453118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:46.2453645Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:46.2453819Z 2025-08-14T21:56:46.2453915Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2454147Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2454400Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2454796Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2455161Z return mod(**inputs) 2025-08-14T21:56:46.2455579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2456011Z outputs = self.model.decoder( 2025-08-14T21:56:46.2456442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2456882Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2457259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2457654Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2458091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:56:46.2458566Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:56:46.2459000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:56:46.2459377Z return self.act(input) 2025-08-14T21:56:46.2459575Z 2025-08-14T21:56:46.2459675Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2459899Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2460124Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2460348Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2460570Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2460796Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2461024Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2461242Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2461501Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:56:46.2461919Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2462275Z return mod(**inputs) 2025-08-14T21:56:46.2462678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2463116Z outputs = self.model.decoder( 2025-08-14T21:56:46.2463546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2463974Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2464357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2464753Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2465189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2465674Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2466138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2466598Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2467099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:56:46.2467611Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:56:46.2467839Z 2025-08-14T21:56:46.2467951Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:56:46.2468387Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:56:46.2468741Z return mod(**inputs) 2025-08-14T21:56:46.2469153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-08-14T21:56:46.2469605Z outputs = self.model.decoder( 2025-08-14T21:56:46.2470034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:56:46.2470469Z layer_outputs = decoder_layer( 2025-08-14T21:56:46.2470854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:56:46.2471251Z return super().__call__(*args, **kwargs) 2025-08-14T21:56:46.2471684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:56:46.2472145Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:56:46.2472604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:56:46.2473059Z attn_output, attn_weights = attention_interface( 2025-08-14T21:56:46.2473532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:56:46.2474029Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:56:46.2474213Z 2025-08-14T21:56:46.2474309Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2474534Z cudagraph partition due to non gpu ops 2025-08-14T21:56:46.2474779Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:56:46.2556294Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:56:46.2556721Z     return mod(**inputs)
2025-08-14T21:56:46.2557157Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward
2025-08-14T21:56:46.2557594Z     outputs = self.model.decoder(
2025-08-14T21:56:46.2558043Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-08-14T21:56:46.2558474Z     layer_outputs = decoder_layer(
2025-08-14T21:56:46.2558857Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:56:46.2559287Z     return super().__call__(*args, **kwargs)
2025-08-14T21:56:46.2559743Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward
2025-08-14T21:56:46.2560219Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:56:46.2560638Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:56:46.2560996Z     return self.act(input)
2025-08-14T21:56:46.2561123Z 
2025-08-14T21:56:46.2561208Z cudagraph partition due to non gpu ops
2025-08-14T21:56:46.2561470Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:56:46.2561871Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:56:46.2562228Z     return mod(**inputs)
2025-08-14T21:56:46.2562632Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1650, in forward
2025-08-14T21:56:46.2563077Z     logits = self.lm_head(outputs[0])
2025-08-14T21:56:46.2563219Z 
2025-08-14T21:56:46.2563330Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:56:46.2563726Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:56:46.2564068Z     return mod(**inputs)
2025-08-14T21:56:46.2564467Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1656, in forward
2025-08-14T21:56:46.2564984Z     loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T21:56:46.2565208Z 
2025-08-14T21:56:56.1879751Z Compilation time (from dynamo_timed): 18.793468798
2025-08-14T21:56:56.1902545Z pass
2025-08-14T21:56:56.1902890Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:56:56.1909132Z TIMING: _recursive_pre_grad_passes:0.03924 _recursive_joint_graph_passes:0.68952 _recursive_post_grad_passes:0.08161 async_compile.wait:0.77823 code_gen:10.00442 inductor_compile:11.81656 backend_compile:16.43107 gc:0.00029 entire_frame_compile:18.79347 total_wall_time:18.79347
2025-08-14T21:56:56.1910163Z STATS: call_* op count: 369 | FakeTensorMode.__torch_dispatch__:24794 | FakeTensor.__torch_dispatch__:3939 | ProxyTorchDispatchMode.__torch_dispatch__:5623
2025-08-14T21:56:56.1910713Z Dynamo produced 1 graphs covering 369 ops with 0 graph breaks (0 unique)
2025-08-14T21:57:02.3875081Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30.
Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:57:02.3876114Z   from pkg_resources import resource_filename
2025-08-14T21:57:03.0066095Z 
2025-08-14T21:57:09.0904809Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:57:09.0909562Z loading model: 0it [00:06, ?it/s]
2025-08-14T21:57:09.0937603Z cpu eval PegasusForConditionalGeneration
2025-08-14T21:57:09.8507392Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:57:10.1545612Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:57:10.4476346Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:57:32.8725393Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8731256Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8731985Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8732328Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8733074Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8733448Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8733687Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8733928Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8734524Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8734762Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8735006Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8735242Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8735476Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8735711Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8735949Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8736185Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8736412Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8736655Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8736885Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8737152Z cudagraph partition due to non gpu ops.
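For context on the compile and eval records above (the mod(**inputs) call in forward_pass, one Dynamo graph with zero graph breaks), the basic pattern is an nn.Module wrapped by torch.compile and then invoked with keyword inputs. The following is a minimal sketch under those assumptions, using a toy module rather than PegasusForConditionalGeneration or the benchmark harness itself.

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    # Stand-in for the benchmarked model; the real run uses PegasusForConditionalGeneration.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)

    def forward(self, input_ids):
        return self.fc(input_ids)

mod = torch.compile(TinyModel())            # Dynamo/Inductor-compiled module
inputs = {"input_ids": torch.randn(2, 8)}   # mirrors the mod(**inputs) call in the trace
out = mod(**inputs)                         # the first call triggers compilation
print(out.shape)                            # torch.Size([2, 8])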
Found from :
2025-08-14T21:57:32.8737587Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:32.8737966Z     return mod(**inputs)
2025-08-14T21:57:32.8738432Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-08-14T21:57:32.8738892Z     outputs = self.model(
2025-08-14T21:57:32.8739329Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward
2025-08-14T21:57:32.8740053Z     encoder_outputs = self.encoder(
2025-08-14T21:57:32.8740549Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward
2025-08-14T21:57:32.8741016Z     layer_outputs = encoder_layer(
2025-08-14T21:57:32.8741426Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:57:32.8742131Z     return super().__call__(*args, **kwargs)
2025-08-14T21:57:32.8742635Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward
2025-08-14T21:57:32.8743126Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:57:32.8743600Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-08-14T21:57:32.8744086Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:57:32.8744577Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:57:32.8745118Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:57:32.8745347Z 
2025-08-14T21:57:32.8745475Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:57:32.8745880Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:32.8746242Z     return mod(**inputs)
2025-08-14T21:57:32.8746669Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-08-14T21:57:32.8747113Z     outputs = self.model(
2025-08-14T21:57:32.8747534Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward
2025-08-14T21:57:32.8747986Z     encoder_outputs = self.encoder(
2025-08-14T21:57:32.8748503Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward
2025-08-14T21:57:32.8748985Z     layer_outputs = encoder_layer(
2025-08-14T21:57:32.8749385Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:57:32.8749854Z     return super().__call__(*args, **kwargs)
2025-08-14T21:57:32.8750302Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward
2025-08-14T21:57:32.8750775Z     hidden_states, attn_weights = self.self_attn(
2025-08-14T21:57:32.8751341Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-08-14T21:57:32.8751829Z     attn_output, attn_weights = attention_interface(
2025-08-14T21:57:32.8752327Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:57:32.8752842Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:57:32.8753036Z 
2025-08-14T21:57:32.8753131Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8753373Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8753637Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:57:32.8754052Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:32.8754433Z     return mod(**inputs)
2025-08-14T21:57:32.8754857Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-08-14T21:57:32.8755297Z     outputs = self.model(
2025-08-14T21:57:32.8755710Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward
2025-08-14T21:57:32.8756163Z     encoder_outputs = self.encoder(
2025-08-14T21:57:32.8756594Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward
2025-08-14T21:57:32.8757026Z     layer_outputs = encoder_layer(
2025-08-14T21:57:32.8757418Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:57:32.8757841Z     return super().__call__(*args, **kwargs)
2025-08-14T21:57:32.8758298Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward
2025-08-14T21:57:32.8758813Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:57:32.8759267Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:57:32.8759641Z     return self.act(input)
2025-08-14T21:57:32.8759765Z 
2025-08-14T21:57:32.8759856Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8760096Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8760343Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8760561Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8760788Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8761017Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8761244Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8761544Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.8761851Z cudagraph partition due to non gpu ops.
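The encoder-side traces above end in the feed-forward line at modeling_pegasus.py:323, hidden_states = self.activation_fn(self.fc1(hidden_states)). Below is a minimal sketch of that linear-plus-activation step; the layer sizes and the GELU activation are assumptions made for illustration, not values read from the model config.

import torch
import torch.nn as nn

hidden_size, ffn_dim = 1024, 4096                   # assumed sizes, not the real config
fc1 = nn.Linear(hidden_size, ffn_dim)               # stand-in for the layer's fc1
activation_fn = nn.GELU()                           # assumed activation choice

hidden_states = torch.randn(4, 128, hidden_size)    # (batch, seq_len, hidden)
hidden_states = activation_fn(fc1(hidden_states))   # the line from the trace
print(hidden_states.shape)                          # torch.Size([4, 128, 4096])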
Found from : 2025-08-14T21:57:32.8894005Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8894355Z return mod(**inputs) 2025-08-14T21:57:32.8894765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8895200Z outputs = self.model( 2025-08-14T21:57:32.8895618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8896048Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8896475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8896952Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8897338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8897739Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8898204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T21:57:32.8898689Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.8899110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.8899670Z return self.act(input) 2025-08-14T21:57:32.8899840Z 2025-08-14T21:57:32.8899944Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8900181Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8900413Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8900641Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8900871Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8901093Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8901321Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8901549Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8901800Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8902194Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8902551Z return mod(**inputs) 2025-08-14T21:57:32.8902958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8903394Z outputs = self.model( 2025-08-14T21:57:32.8903804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8904236Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8904678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8905098Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8905471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8905849Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8906275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8906709Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8907220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8907657Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8908130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.8908674Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.8908882Z 2025-08-14T21:57:32.8909004Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8909397Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8909764Z return mod(**inputs) 2025-08-14T21:57:32.8910181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8910611Z outputs = self.model( 2025-08-14T21:57:32.8911031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8911480Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8911915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8912397Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8912784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8913182Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8913665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8914127Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8914597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8915096Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8915581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.8916088Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.8916275Z 2025-08-14T21:57:32.8916364Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8916600Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8916851Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8917250Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8917621Z return mod(**inputs) 2025-08-14T21:57:32.8918026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8918465Z outputs = self.model( 2025-08-14T21:57:32.8918881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8919335Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8919773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8920223Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8920614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8921012Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8921455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T21:57:32.8922108Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.8922553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.8922921Z return self.act(input) 2025-08-14T21:57:32.8923047Z 2025-08-14T21:57:32.8923135Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8923367Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8923598Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8923818Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8924049Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8924277Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8924497Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8924723Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8924983Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8925371Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8925731Z return mod(**inputs) 2025-08-14T21:57:32.8926138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8926574Z outputs = self.model( 2025-08-14T21:57:32.8926974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8927406Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8927875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8928306Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8928693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8929123Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8929565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8930022Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8930523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8930984Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8931476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.8931998Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.8932208Z 2025-08-14T21:57:32.8932321Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8932717Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8933077Z return mod(**inputs) 2025-08-14T21:57:32.8933476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8933908Z outputs = self.model( 2025-08-14T21:57:32.8934320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8934759Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8935205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8935650Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8936032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8936425Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8936862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8937320Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8937784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8938244Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8938737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.8939239Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.8939414Z 2025-08-14T21:57:32.8939602Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8939881Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8940145Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8940537Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8940900Z return mod(**inputs) 2025-08-14T21:57:32.8941326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8941754Z outputs = self.model( 2025-08-14T21:57:32.8942401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8942843Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8943284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8943821Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8944222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8944615Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8945093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T21:57:32.8945575Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.8946002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.8946400Z return self.act(input) 2025-08-14T21:57:32.8946560Z 2025-08-14T21:57:32.8946653Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8946886Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8947107Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8947336Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8947565Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8947790Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8948006Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8948230Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8948492Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8948882Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8949244Z return mod(**inputs) 2025-08-14T21:57:32.8949656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8950089Z outputs = self.model( 2025-08-14T21:57:32.8950501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8950942Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8951378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8951806Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8952187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8952597Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8953028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8953481Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8953936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8954395Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8954877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.8955404Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.8955615Z 2025-08-14T21:57:32.8955730Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8956127Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8956483Z return mod(**inputs) 2025-08-14T21:57:32.8956898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8957333Z outputs = self.model( 2025-08-14T21:57:32.8957739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8958182Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8958614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8959086Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8959466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8959863Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8960334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8960804Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8961267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8961777Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8962264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.8962754Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.8962937Z 2025-08-14T21:57:32.8963026Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8963256Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8963511Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8963897Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8964262Z return mod(**inputs) 2025-08-14T21:57:32.8964671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8965093Z outputs = self.model( 2025-08-14T21:57:32.8965505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8965936Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8966364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8966790Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8967172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8967601Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8968051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T21:57:32.8968534Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.8968967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.8969349Z return self.act(input) 2025-08-14T21:57:32.8969467Z 2025-08-14T21:57:32.8969555Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8969785Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8970011Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8970236Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8970455Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8970677Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8970900Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8971116Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8971374Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8971769Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8972130Z return mod(**inputs) 2025-08-14T21:57:32.8972540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8972975Z outputs = self.model( 2025-08-14T21:57:32.8973386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8973819Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8974308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8974756Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8975135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8975553Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8975992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8976459Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8976944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8977440Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8977922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.8978448Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.8978647Z 2025-08-14T21:57:32.8978762Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8979157Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8979617Z return mod(**inputs) 2025-08-14T21:57:32.8980135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8980573Z outputs = self.model( 2025-08-14T21:57:32.8980994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8981438Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8981866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8982286Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8982662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8983040Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8983469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8983909Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8984366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.8984821Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.8985307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.8985803Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.8985981Z 2025-08-14T21:57:32.8986079Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8986302Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8986562Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8986955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8987303Z return mod(**inputs) 2025-08-14T21:57:32.8987712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8988153Z outputs = self.model( 2025-08-14T21:57:32.8988564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8988992Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8989413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8989840Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8990249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8990649Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8991085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T21:57:32.8991585Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.8992008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.8992380Z return self.act(input) 2025-08-14T21:57:32.8992519Z 2025-08-14T21:57:32.8992634Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8992870Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8993091Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8993320Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8993546Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8993767Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8993992Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8994217Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.8994466Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.8994868Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.8995230Z return mod(**inputs) 2025-08-14T21:57:32.8995636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.8996072Z outputs = self.model( 2025-08-14T21:57:32.8996484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.8996931Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.8997356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.8997791Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.8998177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.8998578Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.8999009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.8999465Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.8999917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9000369Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9000857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9001379Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9001583Z 2025-08-14T21:57:32.9001705Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9002106Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9002466Z return mod(**inputs) 2025-08-14T21:57:32.9002880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9003311Z outputs = self.model( 2025-08-14T21:57:32.9003719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.9004161Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.9004592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.9005015Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.9005464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9005879Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9006320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.9006790Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.9007245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9007711Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9008235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9008725Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9008907Z 2025-08-14T21:57:32.9008996Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9009232Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9009482Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9009876Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9010247Z return mod(**inputs) 2025-08-14T21:57:32.9010667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9011090Z outputs = self.model( 2025-08-14T21:57:32.9011505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.9011945Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.9012367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.9012799Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.9013186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9013593Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9014025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T21:57:32.9014510Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9014939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9015313Z return self.act(input) 2025-08-14T21:57:32.9015442Z 2025-08-14T21:57:32.9015532Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9015764Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9016008Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9016228Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9016455Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9016682Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9016898Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9017124Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9017381Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9017777Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9018154Z return mod(**inputs) 2025-08-14T21:57:32.9018563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9018992Z outputs = self.model( 2025-08-14T21:57:32.9019397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.9020010Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.9020486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.9020933Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.9021314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9021741Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9022181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.9022638Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.9023156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9023654Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9024142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9024660Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9024871Z 2025-08-14T21:57:32.9024984Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9025377Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9025735Z return mod(**inputs) 2025-08-14T21:57:32.9026154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9026580Z outputs = self.model( 2025-08-14T21:57:32.9026991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.9027413Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.9027829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.9028252Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.9028629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9029009Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9029436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-08-14T21:57:32.9029885Z hidden_states, attn_weights = self.self_attn( 2025-08-14T21:57:32.9030326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9030783Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9031269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9031752Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9031927Z 2025-08-14T21:57:32.9032012Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9032239Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9032490Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9032866Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9033215Z return mod(**inputs) 2025-08-14T21:57:32.9033614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9034039Z outputs = self.model( 2025-08-14T21:57:32.9034440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-08-14T21:57:32.9034880Z encoder_outputs = self.encoder( 2025-08-14T21:57:32.9035307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-08-14T21:57:32.9035747Z layer_outputs = encoder_layer( 2025-08-14T21:57:32.9036127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9036520Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9036956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward 2025-08-14T21:57:32.9037450Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9037879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9038253Z return self.act(input) 2025-08-14T21:57:32.9038393Z 2025-08-14T21:57:32.9038506Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9038733Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9038964Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9039191Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9039410Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9039638Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9039863Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9040079Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9040334Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9040730Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9041085Z return mod(**inputs) 2025-08-14T21:57:32.9041491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9042149Z outputs = self.model( 2025-08-14T21:57:32.9042578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9043027Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9043482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9043933Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9044326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9044718Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9045166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9045650Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9046139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9046597Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9047088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9047622Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9047826Z 2025-08-14T21:57:32.9047942Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9048340Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9048704Z return mod(**inputs) 2025-08-14T21:57:32.9049119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9049546Z outputs = self.model( 2025-08-14T21:57:32.9049960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9050406Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9050831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9051275Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9051755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9052157Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9052588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9053080Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9053549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9053990Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9054510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9055009Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9055187Z 2025-08-14T21:57:32.9055284Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9055512Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9055742Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9055969Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9056193Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9056414Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9056639Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9056864Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9057114Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9057512Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9057874Z return mod(**inputs) 2025-08-14T21:57:32.9058277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9058709Z outputs = self.model( 2025-08-14T21:57:32.9059118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9059681Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9060122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9060567Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9060955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9061352Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9061791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9062248Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9062709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9063153Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9063635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9064154Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9064356Z 2025-08-14T21:57:32.9064477Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9064859Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9065214Z return mod(**inputs) 2025-08-14T21:57:32.9065637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9066068Z outputs = self.model( 2025-08-14T21:57:32.9066465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9066929Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9067359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9067785Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9068187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9068584Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9069020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9069523Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9070007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9070476Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9070960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9071452Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9071635Z 2025-08-14T21:57:32.9071722Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9071957Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9072207Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9072598Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9072955Z return mod(**inputs) 2025-08-14T21:57:32.9073369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9073791Z outputs = self.model( 2025-08-14T21:57:32.9074201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9074635Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9075058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9075498Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9075883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9076289Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9076719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:57:32.9077206Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9077638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9078014Z return self.act(input) 2025-08-14T21:57:32.9078142Z 2025-08-14T21:57:32.9078228Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9078461Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9078689Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9078908Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9079135Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9079359Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9079574Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9079800Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9080059Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9080444Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9080804Z return mod(**inputs) 2025-08-14T21:57:32.9081211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9081642Z outputs = self.model( 2025-08-14T21:57:32.9082063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9082504Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9082935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9083390Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9083766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9084163Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9084636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9085111Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9085594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9086067Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9086549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9087066Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9087285Z 2025-08-14T21:57:32.9087399Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9087797Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9088152Z return mod(**inputs) 2025-08-14T21:57:32.9088556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9088991Z outputs = self.model( 2025-08-14T21:57:32.9089412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9089843Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9090277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9090712Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9091097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9091487Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9091930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9092394Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9092845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9093307Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9093795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9094295Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9094472Z 2025-08-14T21:57:32.9094559Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9094793Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9095020Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9095240Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9095467Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9095692Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9095917Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9096137Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9096393Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9096793Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9097144Z return mod(**inputs) 2025-08-14T21:57:32.9097586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9098025Z outputs = self.model( 2025-08-14T21:57:32.9098434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9098884Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9099311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9099918Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9100355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9100769Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9101214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9101705Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9102167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9102628Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9103117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9103640Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9103842Z 2025-08-14T21:57:32.9103962Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:57:32.9104361Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:32.9104721Z return mod(**inputs)
2025-08-14T21:57:32.9105141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-08-14T21:57:32.9105579Z outputs = self.model(
2025-08-14T21:57:32.9105983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward
2025-08-14T21:57:32.9106411Z decoder_outputs = self.decoder(
2025-08-14T21:57:32.9106821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-08-14T21:57:32.9107246Z layer_outputs = decoder_layer(
2025-08-14T21:57:32.9107624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:57:32.9108012Z return super().__call__(*args, **kwargs)
2025-08-14T21:57:32.9108452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward
2025-08-14T21:57:32.9108929Z hidden_states, cross_attn_weights = self.encoder_attn(
2025-08-14T21:57:32.9109389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-08-14T21:57:32.9109826Z attn_output, attn_weights = attention_interface(
2025-08-14T21:57:32.9110297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-08-14T21:57:32.9110778Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-08-14T21:57:32.9110952Z
2025-08-14T21:57:32.9111044Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9111264Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9111517Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:57:32.9111907Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:32.9112250Z return mod(**inputs)
2025-08-14T21:57:32.9112678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-08-14T21:57:32.9113098Z outputs = self.model(
2025-08-14T21:57:32.9113497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward
2025-08-14T21:57:32.9113949Z decoder_outputs = self.decoder(
2025-08-14T21:57:32.9114368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-08-14T21:57:32.9114793Z layer_outputs = decoder_layer(
2025-08-14T21:57:32.9115164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:57:32.9115593Z return super().__call__(*args, **kwargs)
2025-08-14T21:57:32.9116033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward
2025-08-14T21:57:32.9116521Z hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T21:57:32.9116941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:57:32.9117314Z return self.act(input)
2025-08-14T21:57:32.9117433Z
2025-08-14T21:57:32.9117525Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9117764Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9117981Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9118207Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9118428Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9118647Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9118871Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9119101Z cudagraph partition due to non gpu ops
2025-08-14T21:57:32.9119350Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:57:32.9119747Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:57:32.9120103Z return mod(**inputs)
2025-08-14T21:57:32.9120507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-08-14T21:57:32.9120945Z outputs = self.model(
2025-08-14T21:57:32.9121343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward
2025-08-14T21:57:32.9121769Z decoder_outputs = self.decoder(
2025-08-14T21:57:32.9122180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-08-14T21:57:32.9122612Z layer_outputs = decoder_layer(
2025-08-14T21:57:32.9122999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:57:32.9123406Z return super().__call__(*args, **kwargs)
2025-08-14T21:57:32.9123834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward
2025-08-14T21:57:32.9124302Z hidden_states, self_attn_weights = self.self_attn(
2025-08-14T21:57:32.9124760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-08-14T21:57:32.9125210Z attn_output, attn_weights = attention_interface(
2025-08-14T21:57:32.9125691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-08-14T21:57:32.9126213Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:57:32.9126413Z
2025-08-14T21:57:32.9126543Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:57:32.9236032Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9236112Z return mod(**inputs) 2025-08-14T21:57:32.9236404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9236480Z outputs = self.model( 2025-08-14T21:57:32.9236773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9236854Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9237165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9237243Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9237484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9237599Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9237882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9238000Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9238328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9238434Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9238756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9238869Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9238872Z 2025-08-14T21:57:32.9238959Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9239057Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9239171Z cudagraph partition due to non gpu ops. 
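For reference, the Pegasus frames above all bottom out in the same SDPA attention call pattern (scaled_dot_product_attention followed by transpose(1, 2).contiguous()). A minimal sketch of that pattern, with tensor sizes that are illustrative assumptions rather than values taken from this job:

    # Sketch of the attention call pattern the tracebacks point at.
    # Shapes below are assumed; only the two marked statements mirror the log.
    import torch
    import torch.nn.functional as F

    batch, heads, seq, head_dim = 2, 16, 128, 64   # assumed sizes
    q = torch.randn(batch, heads, seq, head_dim)
    k = torch.randn(batch, heads, seq, head_dim)
    v = torch.randn(batch, heads, seq, head_dim)

    attn_output = F.scaled_dot_product_attention(q, k, v)       # as in sdpa_attention.py line 81
    attn_output = attn_output.transpose(1, 2).contiguous()      # as in sdpa_attention.py line 91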
Found from : 2025-08-14T21:57:32.9239397Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9239469Z return mod(**inputs) 2025-08-14T21:57:32.9239757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9239841Z outputs = self.model( 2025-08-14T21:57:32.9240127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9240207Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9240504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9240581Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9240830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9240915Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9241202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:57:32.9241341Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9241574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9241658Z return self.act(input) 2025-08-14T21:57:32.9241662Z 2025-08-14T21:57:32.9241746Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242052Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242152Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242235Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242316Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242409Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242491Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242573Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9242695Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9242912Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9242998Z return mod(**inputs) 2025-08-14T21:57:32.9243289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9243364Z outputs = self.model( 2025-08-14T21:57:32.9243759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9243843Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9244130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9244253Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9244494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9244589Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9244913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9245049Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9245348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9245455Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9245777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9245918Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9245925Z 2025-08-14T21:57:32.9246036Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9246262Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9246332Z return mod(**inputs) 2025-08-14T21:57:32.9246643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9246725Z outputs = self.model( 2025-08-14T21:57:32.9247002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9247087Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9247368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9247443Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9247685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9247770Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9248062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9248166Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9248447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9248555Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9248864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9248976Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9248987Z 2025-08-14T21:57:32.9249071Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249155Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249241Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249321Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249400Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249487Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249566Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249650Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9249765Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9249978Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9250054Z return mod(**inputs) 2025-08-14T21:57:32.9250371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9250444Z outputs = self.model( 2025-08-14T21:57:32.9250729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9250835Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9251126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9251214Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9251499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9251593Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9251895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9252011Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9252301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9252403Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9252710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9252861Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9252865Z 2025-08-14T21:57:32.9252978Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9253204Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9253276Z return mod(**inputs) 2025-08-14T21:57:32.9253566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9253652Z outputs = self.model( 2025-08-14T21:57:32.9253940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9254028Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9254318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9254399Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9254652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9254741Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9255032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9255157Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9255443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9255556Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9255870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9255986Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9255990Z 2025-08-14T21:57:32.9256085Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9256170Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9256291Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9256513Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9256585Z return mod(**inputs) 2025-08-14T21:57:32.9256901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9256977Z outputs = self.model( 2025-08-14T21:57:32.9257260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9257370Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9257658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9257743Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9257984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9258106Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9258402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:57:32.9258531Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9258771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9258846Z return self.act(input) 2025-08-14T21:57:32.9258850Z 2025-08-14T21:57:32.9258935Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259030Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259113Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259195Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259284Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259366Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259448Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259655Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9259793Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9260030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9260108Z return mod(**inputs) 2025-08-14T21:57:32.9260405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9260494Z outputs = self.model( 2025-08-14T21:57:32.9260785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9260874Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9261172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9261255Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9261513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9261606Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9261902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9262026Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9262317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9262436Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9262753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9262898Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9262903Z 2025-08-14T21:57:32.9263029Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9263254Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9263329Z return mod(**inputs) 2025-08-14T21:57:32.9263634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9263739Z outputs = self.model( 2025-08-14T21:57:32.9264031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9264112Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9264428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9264515Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9264771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9264897Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9265182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9265288Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9265578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9265681Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9265992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9266118Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9266122Z 2025-08-14T21:57:32.9266209Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266300Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266384Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266467Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266555Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266637Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266718Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266807Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9266919Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9267144Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9267216Z return mod(**inputs) 2025-08-14T21:57:32.9267503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9267586Z outputs = self.model( 2025-08-14T21:57:32.9267870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9267953Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9268248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9268325Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9268574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9268660Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9268950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9269068Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9269327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9269421Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9269730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9269867Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9269871Z 2025-08-14T21:57:32.9269987Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9270220Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9270292Z return mod(**inputs) 2025-08-14T21:57:32.9270577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9270666Z outputs = self.model( 2025-08-14T21:57:32.9270949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9271028Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9271321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9271423Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9271661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9271746Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9272032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9272145Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9272428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9272523Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9272809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9272929Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9272932Z 2025-08-14T21:57:32.9273012Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9273099Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9273202Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9273405Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9273479Z return mod(**inputs) 2025-08-14T21:57:32.9273741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9273810Z outputs = self.model( 2025-08-14T21:57:32.9274080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9274154Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9274437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9274509Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9274727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9274815Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9275075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:57:32.9275194Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9275413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9275481Z return self.act(input) 2025-08-14T21:57:32.9275486Z 2025-08-14T21:57:32.9275572Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9275649Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9275727Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9275812Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9275888Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9275962Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9276046Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9276121Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9276260Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9276465Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9276529Z return mod(**inputs) 2025-08-14T21:57:32.9276824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9276892Z outputs = self.model( 2025-08-14T21:57:32.9277154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9277275Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9277550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9277633Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9277862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9277940Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9278212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9278316Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9278581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9278685Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9278976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9279113Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9279117Z 2025-08-14T21:57:32.9279219Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9279422Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9279494Z return mod(**inputs) 2025-08-14T21:57:32.9279757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9279834Z outputs = self.model( 2025-08-14T21:57:32.9280098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9280171Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9280449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9280523Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9280742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9280830Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9281092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9281198Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9281462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9281556Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9281859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9281971Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9281974Z 2025-08-14T21:57:32.9282064Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282143Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282219Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282319Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282397Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282471Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282559Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282634Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9282754Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9282964Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9283030Z return mod(**inputs) 2025-08-14T21:57:32.9283319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9283403Z outputs = self.model( 2025-08-14T21:57:32.9283665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9283748Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9284014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9284093Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9284314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9284412Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9284681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9284791Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9285056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9285163Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9285454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9285594Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9285598Z 2025-08-14T21:57:32.9285699Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9285904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9285979Z return mod(**inputs) 2025-08-14T21:57:32.9286245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9286323Z outputs = self.model( 2025-08-14T21:57:32.9286587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9286662Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9286937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9287011Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9287230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9287316Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9287579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9287694Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9287956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9288055Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9288352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9288458Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9288478Z 2025-08-14T21:57:32.9288569Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9288646Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9288748Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9288974Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9289040Z return mod(**inputs) 2025-08-14T21:57:32.9289310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9289386Z outputs = self.model( 2025-08-14T21:57:32.9289690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9289773Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9290040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9290111Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9290338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9290417Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9290675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:57:32.9290801Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9291012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9291092Z return self.act(input) 2025-08-14T21:57:32.9291096Z 2025-08-14T21:57:32.9291174Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291249Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291332Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291409Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291483Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291565Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291645Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291727Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9291830Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9292028Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9292102Z return mod(**inputs) 2025-08-14T21:57:32.9292366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9292435Z outputs = self.model( 2025-08-14T21:57:32.9292701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9292775Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9293049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9293121Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9293338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9293426Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9293684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9293782Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9294054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9294149Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9294460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9294591Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9294594Z 2025-08-14T21:57:32.9294697Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9294920Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9294984Z return mod(**inputs) 2025-08-14T21:57:32.9295258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9295326Z outputs = self.model( 2025-08-14T21:57:32.9295646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9295730Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9296001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9296079Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9296320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9296403Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9296697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-08-14T21:57:32.9296803Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T21:57:32.9297083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9297193Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9297495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9297616Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9297621Z 2025-08-14T21:57:32.9297706Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9297787Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9297875Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9297956Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9298034Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9298122Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9298200Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9298288Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9298397Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9298612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9298689Z return mod(**inputs) 2025-08-14T21:57:32.9298967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9299041Z outputs = self.model( 2025-08-14T21:57:32.9299330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9299409Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9299869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9299965Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9300209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9300309Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9300598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9300718Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9301058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9301167Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9301496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-08-14T21:57:32.9301657Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:57:32.9301662Z 2025-08-14T21:57:32.9301771Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9301993Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9302103Z return mod(**inputs) 2025-08-14T21:57:32.9302440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9302512Z outputs = self.model( 2025-08-14T21:57:32.9302790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9302876Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9303151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9303229Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9303472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9303556Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9303839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-08-14T21:57:32.9303956Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-08-14T21:57:32.9304230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-08-14T21:57:32.9304340Z attn_output, attn_weights = attention_interface( 2025-08-14T21:57:32.9304641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-08-14T21:57:32.9304761Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:57:32.9304766Z 2025-08-14T21:57:32.9304850Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9304933Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9305046Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9305255Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9305327Z return mod(**inputs) 2025-08-14T21:57:32.9305610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-08-14T21:57:32.9305682Z outputs = self.model( 2025-08-14T21:57:32.9305963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-08-14T21:57:32.9306041Z decoder_outputs = self.decoder( 2025-08-14T21:57:32.9306317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-08-14T21:57:32.9306401Z layer_outputs = decoder_layer( 2025-08-14T21:57:32.9306631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:57:32.9306713Z return super().__call__(*args, **kwargs) 2025-08-14T21:57:32.9307001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 438, in forward 2025-08-14T21:57:32.9307127Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T21:57:32.9307356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:57:32.9307429Z return self.act(input) 2025-08-14T21:57:32.9307452Z 2025-08-14T21:57:32.9307535Z cudagraph partition due to non gpu ops 2025-08-14T21:57:32.9307648Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:57:32.9307860Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9307960Z return mod(**inputs) 2025-08-14T21:57:32.9308238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1489, in forward 2025-08-14T21:57:32.9308365Z lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias 2025-08-14T21:57:32.9308385Z 2025-08-14T21:57:32.9308557Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:57:32.9308768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:57:32.9308837Z return mod(**inputs) 2025-08-14T21:57:32.9309147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1494, in forward 2025-08-14T21:57:32.9309324Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-08-14T21:57:32.9309329Z 2025-08-14T21:57:46.1700692Z Compilation time (from dynamo_timed): 34.23344719 2025-08-14T21:57:46.1721669Z pass 2025-08-14T21:57:46.1726320Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:57:46.1727158Z TIMING: _recursive_pre_grad_passes:0.09383 _recursive_joint_graph_passes:1.19576 _recursive_post_grad_passes:0.17432 async_compile.wait:0.82055 code_gen:12.63843 inductor_compile:16.78799 backend_compile:28.05847 gc:0.00114 entire_frame_compile:34.23345 total_wall_time:34.23345 2025-08-14T21:57:46.1728177Z STATS: call_* op count: 965 | FakeTensorMode.__torch_dispatch__:63082 | FakeTensor.__torch_dispatch__:9680 | ProxyTorchDispatchMode.__torch_dispatch__:13875 2025-08-14T21:57:46.1728682Z Dynamo produced 1 graphs covering 965 ops with 0 graph breaks (0 unique) 2025-08-14T21:57:52.7650547Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T21:57:52.7651602Z from pkg_resources import resource_filename 2025-08-14T21:57:53.3565338Z 2025-08-14T21:57:53.3681588Z loading model: 0it [00:00, ?it/s]If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.` 2025-08-14T21:57:53.3684402Z WARNING:transformers.models.roberta.modeling_roberta:If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.` 2025-08-14T21:57:54.8221921Z We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked. 2025-08-14T21:57:54.8222909Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded. 2025-08-14T21:57:54.8223895Z WARNING:transformers.modeling_utils:We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked. 2025-08-14T21:57:54.8224901Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded. 
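The last two Pegasus frames above quote the LM-head projection and the masked-LM loss. A small sketch of that tail end of the forward pass; the layer wiring, sizes, and loss object are assumptions, and only the two marked statements come from the logged frames:

    # Sketch of the final Pegasus steps seen in the tracebacks.
    import torch
    import torch.nn as nn

    vocab_size, hidden, batch, seq = 96000, 1024, 2, 128        # assumed sizes
    lm_head = nn.Linear(hidden, vocab_size, bias=False)         # assumed wiring
    final_logits_bias = torch.zeros(1, vocab_size)
    loss_fct = nn.CrossEntropyLoss()

    decoder_hidden = torch.randn(batch, seq, hidden)            # stands in for outputs[0]
    labels = torch.randint(0, vocab_size, (batch, seq))

    lm_logits = lm_head(decoder_hidden) + final_logits_bias                      # modeling_pegasus.py:1489
    masked_lm_loss = loss_fct(lm_logits.view(-1, vocab_size), labels.view(-1))   # modeling_pegasus.py:1494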
2025-08-14T21:57:55.0025759Z 2025-08-14T21:57:55.0026800Z loading model: 0it [00:01, ?it/s] 2025-08-14T21:57:55.0040172Z cpu eval RobertaForCausalLM 2025-08-14T21:57:55.6117474Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:57:55.9599513Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:57:56.2536135Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T21:58:06.3010488Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:06.3012457Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3012863Z return mod(**inputs) 2025-08-14T21:58:06.3013340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3013860Z outputs = self.roberta( 2025-08-14T21:58:06.3014355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-08-14T21:58:06.3014825Z embedding_output = self.embeddings( 2025-08-14T21:58:06.3015256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-08-14T21:58:06.3015839Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:58:06.3016505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1576, in create_position_ids_from_input_ids 2025-08-14T21:58:06.3017033Z mask = input_ids.ne(padding_idx).int() 2025-08-14T21:58:06.3017265Z 2025-08-14T21:58:06.3017397Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3017633Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3017853Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3018070Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3018348Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3018562Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3018784Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3019001Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3019212Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3019430Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3019771Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3019993Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3020252Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3020658Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3021022Z return mod(**inputs) 2025-08-14T21:58:06.3021436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3021873Z outputs = self.roberta( 2025-08-14T21:58:06.3022272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-08-14T21:58:06.3022678Z embedding_output = self.embeddings( 2025-08-14T21:58:06.3023072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-08-14T21:58:06.3023636Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:58:06.3024262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids 2025-08-14T21:58:06.3024894Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:58:06.3025153Z 2025-08-14T21:58:06.3025266Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:58:06.3025658Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3026008Z return mod(**inputs) 2025-08-14T21:58:06.3026407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3026857Z outputs = self.roberta( 2025-08-14T21:58:06.3027244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-08-14T21:58:06.3027652Z embedding_output = self.embeddings( 2025-08-14T21:58:06.3028067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-08-14T21:58:06.3028619Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-08-14T21:58:06.3029257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids 2025-08-14T21:58:06.3029904Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-08-14T21:58:06.3030160Z 2025-08-14T21:58:06.3030246Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3030473Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3030695Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3030905Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3031124Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3031348Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3031556Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3031786Z cudagraph partition due to non gpu ops. 
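The RoBERTa frames above quote create_position_ids_from_input_ids (modeling_roberta.py lines 1576-1577). A short sketch of that computation; the example input_ids and padding_idx are assumptions, and the final return expression is reconstructed from memory of the transformers source rather than from this log:

    # Sketch of the position-id computation quoted in the frames above.
    import torch

    input_ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])   # assumed ids; 1 = pad
    padding_idx = 1
    past_key_values_length = 0

    mask = input_ids.ne(padding_idx).int()                                                       # line 1576
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask  # line 1577
    position_ids = incremental_indices.long() + padding_idx   # assumed return value, not shown in the log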
2025-08-14T21:58:06.3032176Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:58:06.3032527Z     return mod(**inputs)
2025-08-14T21:58:06.3032918Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward
2025-08-14T21:58:06.3033333Z     outputs = self.roberta(
2025-08-14T21:58:06.3033714Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward
2025-08-14T21:58:06.3034122Z     encoder_outputs = self.encoder(
2025-08-14T21:58:06.3034533Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward
2025-08-14T21:58:06.3034950Z     layer_outputs = layer_module(
2025-08-14T21:58:06.3035354Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:58:06.3035747Z     return super().__call__(*args, **kwargs)
2025-08-14T21:58:06.3036183Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward
2025-08-14T21:58:06.3036617Z     self_attention_outputs = self.attention(
2025-08-14T21:58:06.3037031Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:58:06.3037439Z     return func(*args, **kwargs)
2025-08-14T21:58:06.3037841Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward
2025-08-14T21:58:06.3038258Z     self_outputs = self.self(
2025-08-14T21:58:06.3038644Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-08-14T21:58:06.3039033Z     return func(*args, **kwargs)
2025-08-14T21:58:06.3039435Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward
2025-08-14T21:58:06.3039919Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-08-14T21:58:06.3040113Z 
2025-08-14T21:58:06.3040205Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3040428Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3040679Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:58:06.3041069Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:58:06.3041406Z     return mod(**inputs)
2025-08-14T21:58:06.3042138Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward
2025-08-14T21:58:06.3042576Z     outputs = self.roberta(
2025-08-14T21:58:06.3042982Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward
2025-08-14T21:58:06.3043426Z     encoder_outputs = self.encoder(
2025-08-14T21:58:06.3043844Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward
2025-08-14T21:58:06.3044260Z     layer_outputs = layer_module(
2025-08-14T21:58:06.3044689Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:58:06.3045073Z     return super().__call__(*args, **kwargs)
2025-08-14T21:58:06.3045492Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward
2025-08-14T21:58:06.3045921Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:58:06.3046340Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:58:06.3046759Z     return forward_fn(*input_tensors)
2025-08-14T21:58:06.3047210Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk
2025-08-14T21:58:06.3047714Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:58:06.3048177Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward
2025-08-14T21:58:06.3048639Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:58:06.3049044Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:58:06.3049400Z     return self.act(input)
2025-08-14T21:58:06.3049528Z 
2025-08-14T21:58:06.3049615Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3049842Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3050064Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3050277Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3050496Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3050712Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3050920Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3051137Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3051387Z cudagraph partition due to non gpu ops.
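The first pair of stacks above ends inside create_position_ids_from_input_ids (modeling_roberta.py, lines 1576 and 1577): the input_ids.ne(...) mask and the torch.cumsum(...) index arithmetic are the integer ops the partitioner reports as non-GPU work in this CPU run. As a reading aid, here is a minimal sketch of that helper reconstructed only from the two traced lines; the final return statement is an assumption added to make the sketch runnable and does not appear in this log.

# Minimal sketch, reconstructed from the two lines visible in the stacks above
# (modeling_roberta.py:1576-1577). Function and argument names come from the
# traceback; the final `return` is an assumption for illustration only.
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # line 1576 in the trace: non-pad positions become 1, pad positions 0
    mask = input_ids.ne(padding_idx).int()
    # line 1577 in the trace: running count of non-pad tokens, shifted by any cached length
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    # assumed: offset by padding_idx so pad positions keep padding_idx (not shown in this log)
    return incremental_indices.long() + padding_idx

if __name__ == "__main__":
    ids = torch.tensor([[1, 5, 6, 7, 1]])  # 1 used here as a stand-in padding id
    print(create_position_ids_from_input_ids(ids, padding_idx=1))  # tensor([[1, 2, 3, 4, 1]])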
Found from : 2025-08-14T21:58:06.3051759Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3052088Z return mod(**inputs) 2025-08-14T21:58:06.3052461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3052855Z outputs = self.roberta( 2025-08-14T21:58:06.3053224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3053618Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3054011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3054406Z layer_outputs = layer_module( 2025-08-14T21:58:06.3054745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3055104Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3055506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3055904Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3056317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3056697Z return func(*args, **kwargs) 2025-08-14T21:58:06.3057090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3057494Z self_outputs = self.self( 2025-08-14T21:58:06.3057852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3058223Z return func(*args, **kwargs) 2025-08-14T21:58:06.3058597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3059106Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3059300Z 2025-08-14T21:58:06.3059379Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3059685Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3059925Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3060318Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3060667Z return mod(**inputs) 2025-08-14T21:58:06.3061058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3061473Z outputs = self.roberta( 2025-08-14T21:58:06.3061850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3062250Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3062637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3063036Z layer_outputs = layer_module( 2025-08-14T21:58:06.3063388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3063752Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3064144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3064551Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3064956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3065344Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3065773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3066266Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3066714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3067117Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3067483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3067812Z return self.act(input) 2025-08-14T21:58:06.3067918Z 2025-08-14T21:58:06.3068000Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3068197Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3068397Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3068596Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3068786Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3068981Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3069176Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3069368Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3069596Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3069943Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3070246Z return mod(**inputs) 2025-08-14T21:58:06.3070627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3071003Z outputs = self.roberta( 2025-08-14T21:58:06.3071362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3071751Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3072125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3072506Z layer_outputs = layer_module( 2025-08-14T21:58:06.3072873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3073210Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3073588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3073977Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3074337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3074691Z return func(*args, **kwargs) 2025-08-14T21:58:06.3075058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3075430Z self_outputs = self.self( 2025-08-14T21:58:06.3075767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3076126Z return func(*args, **kwargs) 2025-08-14T21:58:06.3076498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3076937Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3077120Z 2025-08-14T21:58:06.3077196Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3077396Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3077620Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3077956Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3078264Z return mod(**inputs) 2025-08-14T21:58:06.3078616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3078980Z outputs = self.roberta( 2025-08-14T21:58:06.3079340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3079715Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3080084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3080449Z layer_outputs = layer_module( 2025-08-14T21:58:06.3080783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3081128Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3081503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3081884Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3082269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3082644Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3083042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3083489Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3083926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3084339Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3084695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3085040Z return self.act(input) 2025-08-14T21:58:06.3085148Z 2025-08-14T21:58:06.3085234Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3085429Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3085633Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3085834Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3086069Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3086260Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3086456Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3086653Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3086870Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3087231Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3087545Z return mod(**inputs) 2025-08-14T21:58:06.3087893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3088268Z outputs = self.roberta( 2025-08-14T21:58:06.3088626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3089002Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3089371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3089745Z layer_outputs = layer_module( 2025-08-14T21:58:06.3090079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3090423Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3090796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3091181Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3091546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3091895Z return func(*args, **kwargs) 2025-08-14T21:58:06.3092264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3092636Z self_outputs = self.self( 2025-08-14T21:58:06.3092979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3093324Z return func(*args, **kwargs) 2025-08-14T21:58:06.3093690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3094120Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3094295Z 2025-08-14T21:58:06.3094370Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3094667Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3094894Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3095243Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3095548Z return mod(**inputs) 2025-08-14T21:58:06.3095909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3096288Z outputs = self.roberta( 2025-08-14T21:58:06.3096646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3097020Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3097419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3097800Z layer_outputs = layer_module( 2025-08-14T21:58:06.3098132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3098495Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3098874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3099260Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3099787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3100203Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3100665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3101142Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3101592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3102017Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3102390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3102716Z return self.act(input) 2025-08-14T21:58:06.3102835Z 2025-08-14T21:58:06.3102915Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3103129Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3103334Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3103529Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3103730Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3103930Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3104124Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3104327Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3104555Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3104899Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3105216Z return mod(**inputs) 2025-08-14T21:58:06.3105576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3105956Z outputs = self.roberta( 2025-08-14T21:58:06.3106311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3106692Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3107068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3107442Z layer_outputs = layer_module( 2025-08-14T21:58:06.3107779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3108125Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3108512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3108897Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3109268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3109632Z return func(*args, **kwargs) 2025-08-14T21:58:06.3109994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3110370Z self_outputs = self.self( 2025-08-14T21:58:06.3110745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3111103Z return func(*args, **kwargs) 2025-08-14T21:58:06.3111461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3111916Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3112094Z 2025-08-14T21:58:06.3112178Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3112382Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3112604Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3112986Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3113315Z return mod(**inputs) 2025-08-14T21:58:06.3113668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3114044Z outputs = self.roberta( 2025-08-14T21:58:06.3114402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3114778Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3115141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3115515Z layer_outputs = layer_module( 2025-08-14T21:58:06.3115850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3116190Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3116569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3116955Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3117336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3117703Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3118107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3118558Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3118969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3119378Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3119741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3120071Z return self.act(input) 2025-08-14T21:58:06.3120175Z 2025-08-14T21:58:06.3120250Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3120452Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3120650Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3120840Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3121035Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3121229Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3121423Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3121611Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3121833Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3122179Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3122481Z return mod(**inputs) 2025-08-14T21:58:06.3122835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3123208Z outputs = self.roberta( 2025-08-14T21:58:06.3123567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3123938Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3124336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3124714Z layer_outputs = layer_module( 2025-08-14T21:58:06.3125037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3125394Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3125774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3126156Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3126548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3126909Z return func(*args, **kwargs) 2025-08-14T21:58:06.3127273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3127639Z self_outputs = self.self( 2025-08-14T21:58:06.3127981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3128333Z return func(*args, **kwargs) 2025-08-14T21:58:06.3128698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3129121Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3129303Z 2025-08-14T21:58:06.3129380Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3129586Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3129808Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3130155Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3130466Z return mod(**inputs) 2025-08-14T21:58:06.3130821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3131184Z outputs = self.roberta( 2025-08-14T21:58:06.3131551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3131928Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3132296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3132670Z layer_outputs = layer_module( 2025-08-14T21:58:06.3133009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3133363Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3133744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3134145Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3134537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3134921Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3135327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3135795Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3136224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3136651Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3137013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3137345Z return self.act(input) 2025-08-14T21:58:06.3137451Z 2025-08-14T21:58:06.3137563Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3137766Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3137970Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3138171Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3138382Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3138582Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3138784Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3138984Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3139205Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3139686Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3140031Z return mod(**inputs) 2025-08-14T21:58:06.3140401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3140793Z outputs = self.roberta( 2025-08-14T21:58:06.3141182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3141568Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3142131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3142530Z layer_outputs = layer_module( 2025-08-14T21:58:06.3142870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3143218Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3143610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3144011Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3144387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3144750Z return func(*args, **kwargs) 2025-08-14T21:58:06.3145124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3145507Z self_outputs = self.self( 2025-08-14T21:58:06.3145853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3146219Z return func(*args, **kwargs) 2025-08-14T21:58:06.3146589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3147031Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3147210Z 2025-08-14T21:58:06.3147286Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3147492Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3147726Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3148072Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3148388Z return mod(**inputs) 2025-08-14T21:58:06.3148748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3149130Z outputs = self.roberta( 2025-08-14T21:58:06.3149489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3149873Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3150254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3150635Z layer_outputs = layer_module( 2025-08-14T21:58:06.3150969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3151322Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3151787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3152190Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3152616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3153007Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3153424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3153925Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3154361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3154780Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3155154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3155476Z return self.act(input) 2025-08-14T21:58:06.3155590Z 2025-08-14T21:58:06.3155667Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3155873Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3156068Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3156267Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3156464Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3156658Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3156856Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3157060Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3157289Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3157636Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3157957Z return mod(**inputs) 2025-08-14T21:58:06.3158325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3158700Z outputs = self.roberta( 2025-08-14T21:58:06.3159068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3159457Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3159839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3160222Z layer_outputs = layer_module( 2025-08-14T21:58:06.3160553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3160900Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3161270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3161659Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3162023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3162381Z return func(*args, **kwargs) 2025-08-14T21:58:06.3162741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3163112Z self_outputs = self.self( 2025-08-14T21:58:06.3163455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3163809Z return func(*args, **kwargs) 2025-08-14T21:58:06.3164172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3164597Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3164772Z 2025-08-14T21:58:06.3164854Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3165070Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3165298Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3165641Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3165976Z return mod(**inputs) 2025-08-14T21:58:06.3166332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3166704Z outputs = self.roberta( 2025-08-14T21:58:06.3167080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3167466Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3167835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3168208Z layer_outputs = layer_module( 2025-08-14T21:58:06.3168539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3168875Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3169251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3169638Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3170009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3170382Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3170787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3171235Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3171642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3172050Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3172410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3172735Z return self.act(input) 2025-08-14T21:58:06.3172839Z 2025-08-14T21:58:06.3172912Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3173110Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3173305Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3173493Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3173694Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3173891Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3174078Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3174274Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3174500Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3174852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3175155Z return mod(**inputs) 2025-08-14T21:58:06.3175506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3175880Z outputs = self.roberta( 2025-08-14T21:58:06.3176225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3176600Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3176972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3177344Z layer_outputs = layer_module( 2025-08-14T21:58:06.3177664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3178010Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3178410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3178789Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3179174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3179639Z return func(*args, **kwargs) 2025-08-14T21:58:06.3180030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3180420Z self_outputs = self.self( 2025-08-14T21:58:06.3180836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3181205Z return func(*args, **kwargs) 2025-08-14T21:58:06.3181576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3182032Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3182224Z 2025-08-14T21:58:06.3182301Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3182511Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3182740Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3183098Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3183426Z return mod(**inputs) 2025-08-14T21:58:06.3183785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3184180Z outputs = self.roberta( 2025-08-14T21:58:06.3184553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3184942Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3185319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3185708Z layer_outputs = layer_module( 2025-08-14T21:58:06.3186050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3186405Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3186784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3187181Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3187575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3187954Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3188370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3188837Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3189265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3189695Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3190084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3190426Z return self.act(input) 2025-08-14T21:58:06.3190540Z 2025-08-14T21:58:06.3190634Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3190834Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3191041Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3191243Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3191437Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3191643Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3191844Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3192070Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3192303Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3192663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3193009Z return mod(**inputs) 2025-08-14T21:58:06.3193372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3193757Z outputs = self.roberta( 2025-08-14T21:58:06.3194146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3194550Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3194933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3195317Z layer_outputs = layer_module( 2025-08-14T21:58:06.3195657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3196005Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3196392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3196788Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3197157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3197526Z return func(*args, **kwargs) 2025-08-14T21:58:06.3197910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3198290Z self_outputs = self.self( 2025-08-14T21:58:06.3198639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3199007Z return func(*args, **kwargs) 2025-08-14T21:58:06.3199381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3199812Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3200005Z 2025-08-14T21:58:06.3200085Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3200296Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3200550Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3200894Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3201216Z return mod(**inputs) 2025-08-14T21:58:06.3201577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3201959Z outputs = self.roberta( 2025-08-14T21:58:06.3202316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3202698Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3203085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3203469Z layer_outputs = layer_module( 2025-08-14T21:58:06.3203820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3204178Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3204572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3204970Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3205369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3205790Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3206216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3206693Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3207150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3207583Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3207954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3208333Z return self.act(input) 2025-08-14T21:58:06.3208446Z 2025-08-14T21:58:06.3208535Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3208748Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3208951Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3209159Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3209369Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3209570Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3209778Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3209985Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3210213Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3210577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3210910Z return mod(**inputs) 2025-08-14T21:58:06.3211279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3211671Z outputs = self.roberta( 2025-08-14T21:58:06.3212043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3212435Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3212817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3213210Z layer_outputs = layer_module( 2025-08-14T21:58:06.3213564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3213922Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3214303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:06.3214695Z self_attention_outputs = self.attention( 2025-08-14T21:58:06.3215074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3215429Z return func(*args, **kwargs) 2025-08-14T21:58:06.3215805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:06.3216194Z self_outputs = self.self( 2025-08-14T21:58:06.3216545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:06.3216899Z return func(*args, **kwargs) 2025-08-14T21:58:06.3217279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:06.3217716Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:06.3217897Z 2025-08-14T21:58:06.3217982Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3218191Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3218432Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:06.3218793Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:06.3219110Z return mod(**inputs) 2025-08-14T21:58:06.3219616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-08-14T21:58:06.3220041Z outputs = self.roberta( 2025-08-14T21:58:06.3220438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:06.3220894Z encoder_outputs = self.encoder( 2025-08-14T21:58:06.3221299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:06.3221713Z layer_outputs = layer_module( 2025-08-14T21:58:06.3222060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:06.3222433Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:06.3222820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:06.3223215Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:06.3223598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:06.3223978Z return forward_fn(*input_tensors) 2025-08-14T21:58:06.3224389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:06.3224841Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:06.3225270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:06.3225697Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:06.3226070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:06.3226396Z return self.act(input) 2025-08-14T21:58:06.3226512Z 2025-08-14T21:58:06.3226591Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3226800Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3226995Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3227197Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3227399Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3227601Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3227798Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3227996Z cudagraph partition due to non gpu ops 2025-08-14T21:58:06.3228223Z cudagraph partition due to non gpu ops. 
2025-08-14T21:58:06.3228223Z cudagraph partition due to non gpu ops. Found from :
  <same self-attention stack as above, again ending at modeling_roberta.py:388, attn_output = torch.nn.functional.scaled_dot_product_attention(>
2025-08-14T21:58:06.3235950Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3236157Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3236384Z cudagraph partition due to non gpu ops. Found from :
  <same feed-forward stack as above, again ending at activations.py:69, return self.act(input)>
2025-08-14T21:58:06.3244395Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3244609Z cudagraph partition due to non gpu ops
2025-08-14T21:58:06.3244819Z cudagraph partition due to non gpu ops
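Every attention-side partition above bottoms out in the call at modeling_roberta.py:388, torch.nn.functional.scaled_dot_product_attention. For reference, a minimal standalone sketch of that call follows; the (batch, heads, seq, head_dim) shapes are illustrative assumptions, not values taken from this run.

import torch
import torch.nn.functional as F

# Illustrative shapes only: (batch, num_heads, seq_len, head_dim).
q = torch.randn(1, 12, 128, 64)
k = torch.randn(1, 12, 128, 64)
v = torch.randn(1, 12, 128, 64)

# Fused attention entry point hit by the RoBERTa self-attention frames above;
# attn_mask and is_causal are optional, and the output keeps the query's shape.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
print(out.shape)  # torch.Size([1, 12, 128, 64])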
2025-08-14T21:58:06.3245046Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1022, in forward
    lm_loss = self.loss_function(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
    loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
    loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-08-14T21:58:15.9841134Z Compilation time (from dynamo_timed): 18.204528374
2025-08-14T21:58:15.9978891Z pass
2025-08-14T21:58:15.9979347Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:58:15.9980801Z TIMING: _recursive_pre_grad_passes:0.03674 _recursive_joint_graph_passes:0.38641 _recursive_post_grad_passes:0.08029 async_compile.wait:0.90541 code_gen:9.26118 inductor_compile:10.96933 backend_compile:15.32295 gc:0.00117 entire_frame_compile:18.20453 total_wall_time:18.20453
2025-08-14T21:58:15.9981734Z STATS: call_* op count: 303 | FakeTensorMode.__torch_dispatch__:24314 | FakeTensor.__torch_dispatch__:3923 | ProxyTorchDispatchMode.__torch_dispatch__:5359
2025-08-14T21:58:15.9982239Z Dynamo produced 1 graphs covering 303 ops with 0 graph breaks (0 unique)
2025-08-14T21:58:21.9161454Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:58:21.9167329Z   from pkg_resources import resource_filename
2025-08-14T21:58:23.7365490Z loading model: 0it [00:00, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-08-14T21:58:23.7366593Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
2025-08-14T21:58:23.7367557Z WARNING:transformers.modeling_utils:We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-08-14T21:58:23.7368540Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
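The transformers warning above concerns the inputs the harness feeds in: when a batch is padded, the model should also receive the tokenizer's attention_mask. A minimal sketch of what the warning asks for, using roberta-base as a stand-in checkpoint rather than the benchmark's own model setup:

import torch
from transformers import AutoModel, AutoTokenizer

# "roberta-base" is an illustrative checkpoint; the benchmark builds its models differently.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base").eval()

batch = tok(["a short sentence", "a noticeably longer sentence that forces padding"],
            padding=True, return_tensors="pt")
with torch.no_grad():
    # Passing attention_mask lets the model ignore the pad positions,
    # which is what the warning recommends.
    out = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
print(out.last_hidden_state.shape)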
2025-08-14T21:58:23.8734585Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:58:23.8748777Z cpu  eval  RobertaForQuestionAnswering
2025-08-14T21:58:24.3021966Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:58:24.5212714Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:58:24.7600940Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:58:34.7492208Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward
    outputs = self.roberta(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward
    embedding_output = self.embeddings(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward
    position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1576, in create_position_ids_from_input_ids
    mask = input_ids.ne(padding_idx).int()
2025-08-14T21:58:34.7497768Z cudagraph partition due to non gpu ops   (repeated 12 times, last at 21:58:34.7500498Z)
2025-08-14T21:58:34.7500758Z cudagraph partition due to non gpu ops. Found from :
  <same stack through RobertaEmbeddings as above, now ending at modeling_roberta.py:1577, incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask>
2025-08-14T21:58:34.7506203Z cudagraph partition due to non gpu ops. Found from :
  <same stack as above, again ending at modeling_roberta.py:1577>
2025-08-14T21:58:34.7511210Z cudagraph partition due to non gpu ops   (repeated 7 times, last at 21:58:34.7512600Z)
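The RobertaForQuestionAnswering partitions start in the embedding layer rather than the encoder: the frames above point at the two lines of create_position_ids_from_input_ids that build position ids out of input_ids with .ne() and torch.cumsum. A self-contained sketch of that helper, reconstructed from the frames quoted in the traces (the final return line is paraphrased from the transformers source and is an assumption, not part of this log):

import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # modeling_roberta.py:1576-1577 as quoted in the traces above.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    # Paraphrased return: non-pad tokens count up from padding_idx + 1, pads stay at padding_idx.
    return incremental_indices.long() + padding_idx

ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])  # illustrative ids; RoBERTa uses padding_idx=1
print(create_position_ids_from_input_ids(ids, padding_idx=1))  # tensor([[2, 3, 4, 5, 1, 1]])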
2025-08-14T21:58:34.7512879Z cudagraph partition due to non gpu ops. Found from :
  <same self-attention stack as above, now entered via modeling_roberta.py:1516 (RobertaForQuestionAnswering) and again ending at modeling_roberta.py:388, attn_output = torch.nn.functional.scaled_dot_product_attention(>
2025-08-14T21:58:34.7521558Z cudagraph partition due to non gpu ops
2025-08-14T21:58:34.7521788Z cudagraph partition due to non gpu ops
2025-08-14T21:58:34.7522049Z cudagraph partition due to non gpu ops. Found from :
  <same feed-forward stack as above (apply_chunking_to_forward at pytorch_utils.py:251 → feed_forward_chunk → intermediate_act_fn), entered via modeling_roberta.py:1516 and ending at activations.py:69, return self.act(input)>
2025-08-14T21:58:34.7531054Z cudagraph partition due to non gpu ops   (repeated 8 times, last at 21:58:34.7532620Z)
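The feed-forward traces pass through transformers' apply_chunking_to_forward (pytorch_utils.py:251); with the default chunk size of 0 it is a plain pass-through call to feed_forward_chunk, otherwise it slices the sequence dimension to bound peak memory. A small sketch of that utility with a stand-in feed-forward function (the gelu body and the shapes are illustrative assumptions):

import torch
from transformers.pytorch_utils import apply_chunking_to_forward

def feed_forward(hidden_states):
    # Stand-in for RobertaLayer.feed_forward_chunk (intermediate + output projections).
    return torch.nn.functional.gelu(hidden_states)

hidden = torch.randn(2, 128, 768)  # (batch, seq_len, hidden_size), illustrative

# chunk_size=0 falls through to a single forward_fn call, the frame seen above.
whole = apply_chunking_to_forward(feed_forward, 0, 1, hidden)
# A non-zero chunk_size splits dim 1 into 128/32 chunks and concatenates the results.
chunked = apply_chunking_to_forward(feed_forward, 32, 1, hidden)
assert torch.allclose(whole, chunked)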
Found from : 2025-08-14T21:58:34.7533255Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7533606Z return mod(**inputs) 2025-08-14T21:58:34.7534006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7534416Z outputs = self.roberta( 2025-08-14T21:58:34.7534817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7535236Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7535642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7536060Z layer_outputs = layer_module( 2025-08-14T21:58:34.7536430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7536812Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7537227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7537656Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7538065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7538463Z return func(*args, **kwargs) 2025-08-14T21:58:34.7538864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7539279Z self_outputs = self.self( 2025-08-14T21:58:34.7539748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7540165Z return func(*args, **kwargs) 2025-08-14T21:58:34.7540588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7541097Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7541300Z 2025-08-14T21:58:34.7541395Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7541614Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7542067Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7542546Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7542888Z return mod(**inputs) 2025-08-14T21:58:34.7543291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7543747Z outputs = self.roberta( 2025-08-14T21:58:34.7544146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7544565Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7544981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7545468Z layer_outputs = layer_module( 2025-08-14T21:58:34.7545839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7546231Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7546675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:34.7547108Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:34.7547531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:34.7547953Z return forward_fn(*input_tensors) 2025-08-14T21:58:34.7548408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:34.7548919Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:34.7549405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:34.7549871Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:34.7550344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:34.7550698Z return self.act(input) 2025-08-14T21:58:34.7550824Z 2025-08-14T21:58:34.7550908Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7551132Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7551354Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7551564Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7551786Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7551991Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7552188Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7552396Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7552639Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7553012Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7553362Z return mod(**inputs) 2025-08-14T21:58:34.7553764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7554187Z outputs = self.roberta( 2025-08-14T21:58:34.7554552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7554947Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7555336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7555728Z layer_outputs = layer_module( 2025-08-14T21:58:34.7556067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7556429Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7556827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7557221Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7557627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7558009Z return func(*args, **kwargs) 2025-08-14T21:58:34.7558399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7558811Z self_outputs = self.self( 2025-08-14T21:58:34.7559177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7559551Z return func(*args, **kwargs) 2025-08-14T21:58:34.7559982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7560445Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7560637Z 2025-08-14T21:58:34.7560719Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7560932Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7561163Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7561528Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7561855Z return mod(**inputs) 2025-08-14T21:58:34.7562224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7562623Z outputs = self.roberta( 2025-08-14T21:58:34.7563001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7563402Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7563791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7564185Z layer_outputs = layer_module( 2025-08-14T21:58:34.7564538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7564896Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7565292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:34.7565703Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:34.7566109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:34.7566493Z return forward_fn(*input_tensors) 2025-08-14T21:58:34.7566919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:34.7567398Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:34.7567841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:34.7568268Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:34.7568654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:34.7569006Z return self.act(input) 2025-08-14T21:58:34.7569118Z 2025-08-14T21:58:34.7569206Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7569412Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7569624Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7569836Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7570041Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7570251Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7570460Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7570660Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7570897Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7571283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7571604Z return mod(**inputs) 2025-08-14T21:58:34.7571992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7572406Z outputs = self.roberta( 2025-08-14T21:58:34.7572781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7573174Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7573567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7573994Z layer_outputs = layer_module( 2025-08-14T21:58:34.7574346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7574706Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7575108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7575510Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7575896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7576300Z return func(*args, **kwargs) 2025-08-14T21:58:34.7576705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7577119Z self_outputs = self.self( 2025-08-14T21:58:34.7577506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7577902Z return func(*args, **kwargs) 2025-08-14T21:58:34.7578496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7578977Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7579185Z 2025-08-14T21:58:34.7579270Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7579545Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7579811Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7580187Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7580531Z return mod(**inputs) 2025-08-14T21:58:34.7580932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7581345Z outputs = self.roberta( 2025-08-14T21:58:34.7581742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7582156Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7582573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7582985Z layer_outputs = layer_module( 2025-08-14T21:58:34.7583357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7583745Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7584165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:34.7584600Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:34.7585025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:34.7585447Z return forward_fn(*input_tensors) 2025-08-14T21:58:34.7585889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:34.7586420Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:34.7586887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:34.7587341Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:34.7587766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:34.7588127Z return self.act(input) 2025-08-14T21:58:34.7588245Z 2025-08-14T21:58:34.7588338Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7588555Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7588816Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7589041Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7589262Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7589498Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7589714Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7589933Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7590175Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7590565Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7590911Z return mod(**inputs) 2025-08-14T21:58:34.7591302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7591721Z outputs = self.roberta( 2025-08-14T21:58:34.7592143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7592580Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7592998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7593428Z layer_outputs = layer_module( 2025-08-14T21:58:34.7593805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7594198Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7594618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7595054Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7595461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7595857Z return func(*args, **kwargs) 2025-08-14T21:58:34.7596275Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7596699Z self_outputs = self.self( 2025-08-14T21:58:34.7597090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7597487Z return func(*args, **kwargs) 2025-08-14T21:58:34.7597897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7598370Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7610551Z 2025-08-14T21:58:34.7610712Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7610978Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7611239Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7611652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7612049Z return mod(**inputs) 2025-08-14T21:58:34.7612500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7612931Z outputs = self.roberta( 2025-08-14T21:58:34.7613458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7613894Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7614312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7614790Z layer_outputs = layer_module( 2025-08-14T21:58:34.7615175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7615571Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7616031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:34.7616505Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:34.7616934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:34.7617351Z return forward_fn(*input_tensors) 2025-08-14T21:58:34.7617797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:34.7618303Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:34.7618777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:34.7619238Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:34.7619757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:34.7620151Z return self.act(input) 2025-08-14T21:58:34.7620280Z 2025-08-14T21:58:34.7620380Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7620615Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7620842Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7621066Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7621284Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7621507Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7621729Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7621942Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7622204Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7622601Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7622954Z return mod(**inputs) 2025-08-14T21:58:34.7623351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7623780Z outputs = self.roberta( 2025-08-14T21:58:34.7624183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7624607Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7625031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7625453Z layer_outputs = layer_module( 2025-08-14T21:58:34.7625826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7626208Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7626630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7627065Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7627476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7627870Z return func(*args, **kwargs) 2025-08-14T21:58:34.7628278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7628693Z self_outputs = self.self( 2025-08-14T21:58:34.7629099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7629500Z return func(*args, **kwargs) 2025-08-14T21:58:34.7629910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7630426Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7630624Z 2025-08-14T21:58:34.7630711Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7630937Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7631237Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7631620Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7631970Z return mod(**inputs) 2025-08-14T21:58:34.7632374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7632797Z outputs = self.roberta( 2025-08-14T21:58:34.7633191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7633621Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7634043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7634462Z layer_outputs = layer_module( 2025-08-14T21:58:34.7634841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7635238Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7635669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:34.7636102Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:34.7636531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:34.7636946Z return forward_fn(*input_tensors) 2025-08-14T21:58:34.7637398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:34.7637903Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:34.7638379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:34.7638844Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:34.7639243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:34.7639610Z return self.act(input) 2025-08-14T21:58:34.7639737Z 2025-08-14T21:58:34.7639822Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7640048Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7640265Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7640485Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7640705Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7640919Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7641139Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7641357Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7641598Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7642277Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7642636Z return mod(**inputs) 2025-08-14T21:58:34.7643045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7643464Z outputs = self.roberta( 2025-08-14T21:58:34.7643974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7644402Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7644810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7645271Z layer_outputs = layer_module( 2025-08-14T21:58:34.7645649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7646050Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7646515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7646972Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7647382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7647785Z return func(*args, **kwargs) 2025-08-14T21:58:34.7648189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7648611Z self_outputs = self.self( 2025-08-14T21:58:34.7648997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7649369Z return func(*args, **kwargs) 2025-08-14T21:58:34.7649777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7650260Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7650460Z 2025-08-14T21:58:34.7650557Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7650776Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7651032Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7651431Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7651750Z return mod(**inputs) 2025-08-14T21:58:34.7652132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7652533Z outputs = self.roberta( 2025-08-14T21:58:34.7652911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7653304Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7653695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7654115Z layer_outputs = layer_module( 2025-08-14T21:58:34.7654485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7654879Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7655313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:34.7655758Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:34.7656177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:34.7656594Z return forward_fn(*input_tensors) 2025-08-14T21:58:34.7657050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:34.7657563Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:34.7658035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:34.7658496Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:34.7658904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:34.7659291Z return self.act(input) 2025-08-14T21:58:34.7659423Z 2025-08-14T21:58:34.7659582Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7659816Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7660074Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7660295Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7660525Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7660758Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7660976Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7661204Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7661514Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7661896Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7662248Z return mod(**inputs) 2025-08-14T21:58:34.7662651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7663075Z outputs = self.roberta( 2025-08-14T21:58:34.7663466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7663892Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7664309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7664718Z layer_outputs = layer_module( 2025-08-14T21:58:34.7665094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7665484Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7665908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7666341Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7666750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7667149Z return func(*args, **kwargs) 2025-08-14T21:58:34.7667556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7667966Z self_outputs = self.self( 2025-08-14T21:58:34.7668356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7668763Z return func(*args, **kwargs) 2025-08-14T21:58:34.7669165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7669646Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7669854Z 2025-08-14T21:58:34.7669939Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7670165Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7670415Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7670805Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7671155Z return mod(**inputs) 2025-08-14T21:58:34.7671550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7671981Z outputs = self.roberta( 2025-08-14T21:58:34.7672378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7672797Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7673204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7673631Z layer_outputs = layer_module( 2025-08-14T21:58:34.7674022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7674411Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7674846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward 2025-08-14T21:58:34.7675301Z layer_output = apply_chunking_to_forward( 2025-08-14T21:58:34.7675705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T21:58:34.7676087Z return forward_fn(*input_tensors) 2025-08-14T21:58:34.7676542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk 2025-08-14T21:58:34.7677062Z intermediate_output = self.intermediate(attention_output) 2025-08-14T21:58:34.7677527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward 2025-08-14T21:58:34.7677963Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T21:58:34.7678348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T21:58:34.7678729Z return self.act(input) 2025-08-14T21:58:34.7678883Z 2025-08-14T21:58:34.7678975Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7679179Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7679395Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7679612Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7679823Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7680043Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7680267Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7680480Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7680734Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:58:34.7681128Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:58:34.7681473Z return mod(**inputs) 2025-08-14T21:58:34.7681845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-08-14T21:58:34.7682250Z outputs = self.roberta( 2025-08-14T21:58:34.7682630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-08-14T21:58:34.7683017Z encoder_outputs = self.encoder( 2025-08-14T21:58:34.7683412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-08-14T21:58:34.7683809Z layer_outputs = layer_module( 2025-08-14T21:58:34.7684163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:58:34.7684516Z return super().__call__(*args, **kwargs) 2025-08-14T21:58:34.7684917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-08-14T21:58:34.7685321Z self_attention_outputs = self.attention( 2025-08-14T21:58:34.7685698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7686080Z return func(*args, **kwargs) 2025-08-14T21:58:34.7686465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-08-14T21:58:34.7686856Z self_outputs = self.self( 2025-08-14T21:58:34.7687211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-08-14T21:58:34.7687590Z return func(*args, **kwargs) 2025-08-14T21:58:34.7687977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-08-14T21:58:34.7688468Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-08-14T21:58:34.7688657Z 2025-08-14T21:58:34.7688738Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7688952Z cudagraph partition due to non gpu ops 2025-08-14T21:58:34.7689191Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:58:34.7745746Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:58:34.7746153Z     return mod(**inputs)
2025-08-14T21:58:34.7746533Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward
2025-08-14T21:58:34.7746925Z     outputs = self.roberta(
2025-08-14T21:58:34.7747334Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward
2025-08-14T21:58:34.7747734Z     encoder_outputs = self.encoder(
2025-08-14T21:58:34.7748128Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward
2025-08-14T21:58:34.7748580Z     layer_outputs = layer_module(
2025-08-14T21:58:34.7748932Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:58:34.7749298Z     return super().__call__(*args, **kwargs)
2025-08-14T21:58:34.7749705Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 569, in forward
2025-08-14T21:58:34.7750120Z     layer_output = apply_chunking_to_forward(
2025-08-14T21:58:34.7750520Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T21:58:34.7750924Z     return forward_fn(*input_tensors)
2025-08-14T21:58:34.7751357Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 577, in feed_forward_chunk
2025-08-14T21:58:34.7751844Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T21:58:34.7752287Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 493, in forward
2025-08-14T21:58:34.7752722Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T21:58:34.7753113Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T21:58:34.7753453Z     return self.act(input)
2025-08-14T21:58:34.7753573Z 
2025-08-14T21:58:34.7753654Z cudagraph partition due to non gpu ops
2025-08-14T21:58:34.7753869Z cudagraph partition due to non gpu ops
2025-08-14T21:58:34.7754108Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:58:34.7754468Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:58:34.7754798Z     return mod(**inputs)
2025-08-14T21:58:34.7755183Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1548, in forward
2025-08-14T21:58:34.7755617Z     start_loss = loss_fct(start_logits, start_positions)
2025-08-14T21:58:34.7755784Z 
2025-08-14T21:58:34.7755887Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T21:58:34.7756247Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:58:34.7756574Z     return mod(**inputs)
2025-08-14T21:58:34.7756946Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1549, in forward
2025-08-14T21:58:34.7757371Z     end_loss = loss_fct(end_logits, end_positions)
2025-08-14T21:58:34.7757521Z 
2025-08-14T21:58:43.1685767Z Compilation time (from dynamo_timed): 16.992627855
2025-08-14T21:58:43.1686243Z pass
2025-08-14T21:58:43.1687342Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:58:43.1688259Z TIMING: _recursive_pre_grad_passes:0.03599 _recursive_joint_graph_passes:0.37348 _recursive_post_grad_passes:0.08575 async_compile.wait:0.0029 code_gen:8.16783 inductor_compile:9.89231 backend_compile:14.20145 gc:0.00158 entire_frame_compile:16.99263 total_wall_time:16.99263
2025-08-14T21:58:43.1690976Z STATS: call_* op count: 303 | FakeTensorMode.__torch_dispatch__:24185 | FakeTensor.__torch_dispatch__:3941 | ProxyTorchDispatchMode.__torch_dispatch__:5386
2025-08-14T21:58:43.1691767Z Dynamo produced 1 graphs covering 303 ops with 0 graph breaks (0 unique)
2025-08-14T21:58:49.0825254Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:58:49.0827339Z   from pkg_resources import resource_filename
2025-08-14T21:58:49.8949026Z 
2025-08-14T21:58:51.0854169Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:58:51.0855019Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:58:51.0864281Z cpu eval T5ForConditionalGeneration
2025-08-14T21:58:52.5153542Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:58:52.9503428Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:58:53.4692303Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:05.9954049Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:59:05.9954602Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9955011Z return mod(**inputs) 2025-08-14T21:59:05.9955458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9955899Z decoder_outputs = self.decoder( 2025-08-14T21:59:05.9956326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:05.9956751Z layer_outputs = layer_module( 2025-08-14T21:59:05.9957167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:05.9957598Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:05.9958064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:05.9958497Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:05.9958923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:05.9959361Z attention_output = self.SelfAttention( 2025-08-14T21:59:05.9959781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward 2025-08-14T21:59:05.9960236Z position_bias = position_bias + causal_mask 2025-08-14T21:59:05.9960412Z 2025-08-14T21:59:05.9960539Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:05.9960949Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9961312Z return mod(**inputs) 2025-08-14T21:59:05.9961711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9962137Z decoder_outputs = self.decoder( 2025-08-14T21:59:05.9962537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:05.9962957Z layer_outputs = layer_module( 2025-08-14T21:59:05.9963351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:05.9963789Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:05.9964203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:05.9964710Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:05.9965134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:05.9965561Z attention_output = self.SelfAttention( 2025-08-14T21:59:05.9966513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:05.9966996Z query_states = self.q(hidden_states) 2025-08-14T21:59:05.9967153Z 2025-08-14T21:59:05.9967307Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:05.9967769Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9968130Z return mod(**inputs) 2025-08-14T21:59:05.9968528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9969049Z decoder_outputs = self.decoder( 2025-08-14T21:59:05.9969506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:05.9969923Z layer_outputs = layer_module( 2025-08-14T21:59:05.9970314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:05.9970717Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:05.9971137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:05.9971557Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:05.9971967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:05.9972380Z attention_output = self.SelfAttention( 2025-08-14T21:59:05.9972800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:05.9973231Z key_states = self.k(current_states) 2025-08-14T21:59:05.9973385Z 2025-08-14T21:59:05.9973517Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:05.9973910Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9974274Z return mod(**inputs) 2025-08-14T21:59:05.9974666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9975081Z decoder_outputs = self.decoder( 2025-08-14T21:59:05.9975504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:05.9975920Z layer_outputs = layer_module( 2025-08-14T21:59:05.9976304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:05.9976998Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:05.9977422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:05.9977843Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:05.9978249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:05.9978694Z attention_output = self.SelfAttention( 2025-08-14T21:59:05.9979113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:05.9979836Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:05.9980049Z 2025-08-14T21:59:05.9980168Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:05.9980567Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9980924Z return mod(**inputs) 2025-08-14T21:59:05.9981311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9981710Z decoder_outputs = self.decoder( 2025-08-14T21:59:05.9982107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:05.9982554Z layer_outputs = layer_module( 2025-08-14T21:59:05.9982933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:05.9983340Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:05.9983774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:05.9984192Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:05.9984599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:05.9985047Z attention_output = self.SelfAttention( 2025-08-14T21:59:05.9985475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:05.9985892Z value_states = self.v(current_states) 2025-08-14T21:59:05.9986048Z 2025-08-14T21:59:05.9986166Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:05.9986572Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9986940Z return mod(**inputs) 2025-08-14T21:59:05.9987322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9987742Z decoder_outputs = self.decoder( 2025-08-14T21:59:05.9988155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:05.9988639Z layer_outputs = layer_module( 2025-08-14T21:59:05.9989020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:05.9989417Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:05.9989859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:05.9990265Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:05.9990675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:05.9991110Z attention_output = self.SelfAttention( 2025-08-14T21:59:05.9991522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:05.9991959Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:05.9992145Z 2025-08-14T21:59:05.9992259Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:05.9992654Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9993006Z return mod(**inputs) 2025-08-14T21:59:05.9993386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9993793Z decoder_outputs = self.decoder( 2025-08-14T21:59:05.9994194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:05.9994593Z layer_outputs = layer_module( 2025-08-14T21:59:05.9994974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:05.9995373Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:05.9995785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:05.9996186Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:05.9996600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:05.9997013Z attention_output = self.SelfAttention( 2025-08-14T21:59:05.9997409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:05.9997877Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:05.9998068Z 2025-08-14T21:59:05.9998183Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:05.9998577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:05.9998949Z return mod(**inputs) 2025-08-14T21:59:05.9999335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:05.9999740Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0000152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0000629Z layer_outputs = layer_module( 2025-08-14T21:59:06.0001017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0001431Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0001836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0002255Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0002672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0003092Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0003499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0003916Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0004066Z 2025-08-14T21:59:06.0004196Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0004589Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0004952Z return mod(**inputs) 2025-08-14T21:59:06.0005360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0005772Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0006168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0006579Z layer_outputs = layer_module( 2025-08-14T21:59:06.0006962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0007354Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0007766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0008183Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0008594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0009011Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0009424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:06.0009840Z value_states = self.v(current_states) 2025-08-14T21:59:06.0009993Z 2025-08-14T21:59:06.0010119Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0010512Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0010875Z return mod(**inputs) 2025-08-14T21:59:06.0011277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0011691Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0012116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0012522Z layer_outputs = layer_module( 2025-08-14T21:59:06.0012935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0013324Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0013726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0014167Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0014581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0015002Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0015416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0015874Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0016025Z 2025-08-14T21:59:06.0016140Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0016534Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0016891Z return mod(**inputs) 2025-08-14T21:59:06.0017272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0017692Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0018107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0018515Z layer_outputs = layer_module( 2025-08-14T21:59:06.0018891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0019294Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0019800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0020222Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0020638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0021054Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0021469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:06.0021905Z key_states = self.k(current_states) 2025-08-14T21:59:06.0022065Z 2025-08-14T21:59:06.0022182Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0022582Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0022937Z return mod(**inputs) 2025-08-14T21:59:06.0023315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0023722Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0024124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0024521Z layer_outputs = layer_module( 2025-08-14T21:59:06.0024906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0025302Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0025709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0026116Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0026521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0026943Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0027349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:06.0027815Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:06.0028021Z 2025-08-14T21:59:06.0028135Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0028563Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0028916Z return mod(**inputs) 2025-08-14T21:59:06.0029306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0029743Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0030141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0030537Z layer_outputs = layer_module( 2025-08-14T21:59:06.0030974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0031369Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0031761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0032173Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0032577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0032989Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0033387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:06.0033821Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:06.0033995Z 2025-08-14T21:59:06.0034115Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0034490Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0034834Z return mod(**inputs) 2025-08-14T21:59:06.0035200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0035592Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0035973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0036362Z layer_outputs = layer_module( 2025-08-14T21:59:06.0036729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0037111Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0037494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0037889Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0038284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0038679Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0039073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:06.0039499Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:06.0039669Z 2025-08-14T21:59:06.0039785Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0040161Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0040506Z return mod(**inputs) 2025-08-14T21:59:06.0040875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0041261Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0041646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0042332Z layer_outputs = layer_module( 2025-08-14T21:59:06.0042715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0043103Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0043597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0044002Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0044388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0044832Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0045243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0045658Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0045833Z 2025-08-14T21:59:06.0045978Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0046371Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0046720Z return mod(**inputs) 2025-08-14T21:59:06.0047091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0047485Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0047873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0048274Z layer_outputs = layer_module( 2025-08-14T21:59:06.0048640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0049034Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0049433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0049840Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0050231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0050634Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0051034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0051424Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0051577Z 2025-08-14T21:59:06.0051687Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0052073Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0052418Z return mod(**inputs) 2025-08-14T21:59:06.0052780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0053178Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0053567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0053957Z layer_outputs = layer_module( 2025-08-14T21:59:06.0054322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0054702Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0055094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0055501Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0055913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0056355Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0056792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:06.0057188Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:06.0057340Z 2025-08-14T21:59:06.0057451Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0057840Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0058209Z return mod(**inputs) 2025-08-14T21:59:06.0058587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0058999Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0059421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0059890Z layer_outputs = layer_module( 2025-08-14T21:59:06.0060277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0060723Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0061128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0061563Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0061989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0062442Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0062898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:06.0063325Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:06.0063483Z 2025-08-14T21:59:06.0063601Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0063998Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0064346Z return mod(**inputs) 2025-08-14T21:59:06.0064732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0065142Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0065532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0065943Z layer_outputs = layer_module( 2025-08-14T21:59:06.0066327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0066733Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0067135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0067563Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0067987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0068441Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0068889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:06.0069306Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:06.0069453Z 2025-08-14T21:59:06.0069577Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0069968Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0070332Z return mod(**inputs) 2025-08-14T21:59:06.0070715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0071121Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0071515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0071917Z layer_outputs = layer_module( 2025-08-14T21:59:06.0072298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0072690Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0073097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0073544Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0073956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0074388Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0074797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0075202Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0075348Z 2025-08-14T21:59:06.0075461Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0122805Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0123142Z return mod(**inputs) 2025-08-14T21:59:06.0123516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0123911Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0124298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0124666Z layer_outputs = layer_module( 2025-08-14T21:59:06.0125015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0125382Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0125746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0126136Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0126524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0126934Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0127330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:06.0127710Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:06.0127852Z 2025-08-14T21:59:06.0127957Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0128321Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0128640Z return mod(**inputs) 2025-08-14T21:59:06.0128991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0129364Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0129723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0130101Z layer_outputs = layer_module( 2025-08-14T21:59:06.0130451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0130825Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0131213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0131613Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0132008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0132430Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0132834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0133236Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0133404Z 2025-08-14T21:59:06.0133525Z cudagraph partition due to non gpu ops. 
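All of the traces above resolve to the same handful of T5 encoder ops: the q/k/v projections, the two attention matmuls, the transpose/contiguous, and the output projection o. As a point of reference only, the sketch below mirrors that flagged self-attention computation; it is an illustrative toy with assumed shapes (d_model=64, n_heads=4), not the benchmark harness or the transformers source. On a CPU-only runner such as linux.8xlarge.amx these ops all execute on CPU, which is presumably why the cudagraph partitioner reports them as non gpu ops.

# Minimal sketch of the flagged T5 self-attention ops (illustrative, not from the log).
import torch
import torch.nn as nn

class TinySelfAttention(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        # Same projection names as in the traceback frames: q, k, v, o.
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)
        self.o = nn.Linear(d_model, d_model, bias=False)

    def forward(self, hidden_states):
        b, s, _ = hidden_states.shape

        def split(x):
            # (b, s, d_model) -> (b, n_heads, s, d_head)
            return x.view(b, s, self.n_heads, self.d_head).transpose(1, 2)

        query_states = split(self.q(hidden_states))
        key_states = split(self.k(hidden_states))
        value_states = split(self.v(hidden_states))
        # The two matmuls flagged at modeling_t5.py lines 526 and 565.
        scores = torch.matmul(query_states, key_states.transpose(3, 2))
        attn_weights = scores.softmax(dim=-1)
        attn_output = torch.matmul(attn_weights, value_states)
        # The transpose/contiguous flagged at line 567, then the o projection (line 569).
        attn_output = attn_output.transpose(1, 2).contiguous().view(b, s, -1)
        return self.o(attn_output)

mod = TinySelfAttention()
compiled = torch.compile(mod)  # on a CPU-only job every one of these ops is a "non gpu op"
out = compiled(torch.randn(2, 8, 64))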
2025-08-14T21:59:06.0284519Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
    encoder_outputs = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
    self_attention_outputs = self.layer[0](
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 609, in forward
    hidden_states = hidden_states + self.dropout(attention_output[0])
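The remaining flagged frames are the residual add with dropout (modeling_t5.py line 609) and the wi -> act -> wo feed-forward chain (lines 287, 288, 296). A comparable minimal sketch of that pattern follows; the sizes, the ReLU choice, and the combined residual are illustrative assumptions, not the actual T5 implementation.

# Minimal sketch of the flagged feed-forward and residual-with-dropout ops (illustrative).
import torch
import torch.nn as nn

class TinyFF(nn.Module):
    def __init__(self, d_model=64, d_ff=128, p=0.1):
        super().__init__()
        # Same op names as in the traceback frames: wi, act, wo, dropout.
        self.wi = nn.Linear(d_model, d_ff, bias=False)
        self.wo = nn.Linear(d_ff, d_model, bias=False)
        self.act = nn.ReLU()
        self.dropout = nn.Dropout(p)

    def forward(self, hidden_states):
        forwarded_states = self.wi(hidden_states)       # line 287 frame
        forwarded_states = self.act(forwarded_states)   # line 288 frame
        forwarded_states = self.wo(forwarded_states)    # line 296 frame
        # Residual add with dropout, mirroring the line-609 frame.
        return hidden_states + self.dropout(forwarded_states)

ff = torch.compile(TinyFF())
y = ff(torch.randn(2, 8, 64))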
Found from : 2025-08-14T21:59:06.0342730Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0343090Z return mod(**inputs) 2025-08-14T21:59:06.0343478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0343894Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0344282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0344689Z layer_outputs = layer_module( 2025-08-14T21:59:06.0345065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0345457Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0345868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0346289Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0346691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0347088Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0347504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0347920Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0348071Z 2025-08-14T21:59:06.0348184Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0348576Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0348928Z return mod(**inputs) 2025-08-14T21:59:06.0349312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0349709Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0350116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0350531Z layer_outputs = layer_module( 2025-08-14T21:59:06.0350909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0351307Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0351720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0352127Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0352523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 609, in forward 2025-08-14T21:59:06.0352990Z hidden_states = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:59:06.0353200Z 2025-08-14T21:59:06.0353312Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0353696Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0354132Z return mod(**inputs) 2025-08-14T21:59:06.0354516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0354902Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0355298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0355669Z layer_outputs = layer_module( 2025-08-14T21:59:06.0356018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0356381Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0356673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0356769Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0357015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0357132Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0357371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:06.0357454Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:06.0357458Z 2025-08-14T21:59:06.0357561Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0357768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0357835Z return mod(**inputs) 2025-08-14T21:59:06.0358073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0358155Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0358386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0358469Z layer_outputs = layer_module( 2025-08-14T21:59:06.0358686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0358765Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0359006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0359096Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0359326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0359452Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0359682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:06.0359767Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:06.0359770Z 2025-08-14T21:59:06.0359874Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0360072Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0360144Z return mod(**inputs) 2025-08-14T21:59:06.0360378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:06.0360461Z encoder_outputs = self.encoder( 2025-08-14T21:59:06.0360691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0360763Z layer_outputs = layer_module( 2025-08-14T21:59:06.0360991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0361069Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0361298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0361418Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0361653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0361774Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0362024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:06.0362103Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:06.0362107Z 2025-08-14T21:59:06.0362218Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0362474Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0362550Z return mod(**inputs) 2025-08-14T21:59:06.0362786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0362858Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0363098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0363169Z layer_outputs = layer_module( 2025-08-14T21:59:06.0363388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0363476Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0363707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0363796Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0364028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0364114Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0364351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:06.0364430Z key_states = self.k(current_states) 2025-08-14T21:59:06.0364433Z 2025-08-14T21:59:06.0364547Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0364746Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0364814Z return mod(**inputs) 2025-08-14T21:59:06.0365055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0365130Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0365378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0365461Z layer_outputs = layer_module( 2025-08-14T21:59:06.0365680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0365765Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0365998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0366079Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0366317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0366402Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0366634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:06.0366770Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:06.0366775Z 2025-08-14T21:59:06.0366880Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0367087Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0367154Z return mod(**inputs) 2025-08-14T21:59:06.0367412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0367495Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0367727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0367824Z layer_outputs = layer_module( 2025-08-14T21:59:06.0368045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0368123Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0368381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0368477Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0368711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0368804Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0369039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:06.0369125Z value_states = self.v(current_states) 2025-08-14T21:59:06.0369129Z 2025-08-14T21:59:06.0369232Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0369431Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0369505Z return mod(**inputs) 2025-08-14T21:59:06.0369742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0369818Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0370062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0370132Z layer_outputs = layer_module( 2025-08-14T21:59:06.0370360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0370435Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0370667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0370755Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0370989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0371079Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0371314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:06.0371425Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:06.0371429Z 2025-08-14T21:59:06.0371537Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0371739Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0371804Z return mod(**inputs) 2025-08-14T21:59:06.0372045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0372119Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0372363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0372433Z layer_outputs = layer_module( 2025-08-14T21:59:06.0372652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0372740Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0372971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0373052Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0373324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0373409Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0373646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:06.0373771Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:06.0373775Z 2025-08-14T21:59:06.0373877Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0374086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0374182Z return mod(**inputs) 2025-08-14T21:59:06.0374442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0374515Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0374750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0374829Z layer_outputs = layer_module( 2025-08-14T21:59:06.0375045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0375124Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0375362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0375441Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0375681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0375768Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0376000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0376086Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0376089Z 2025-08-14T21:59:06.0376193Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0376398Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0376465Z return mod(**inputs) 2025-08-14T21:59:06.0376702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0376781Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0377019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0377091Z layer_outputs = layer_module( 2025-08-14T21:59:06.0377322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0377399Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0377639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0377734Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0377965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0378093Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0378327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:06.0378406Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:06.0378418Z 2025-08-14T21:59:06.0378523Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0378725Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0378799Z return mod(**inputs) 2025-08-14T21:59:06.0379035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0379131Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0379375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0379519Z layer_outputs = layer_module( 2025-08-14T21:59:06.0379797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0379882Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0380135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0380268Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0380541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0380671Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0380936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:06.0381025Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:06.0381029Z 2025-08-14T21:59:06.0381150Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0381369Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0381443Z return mod(**inputs) 2025-08-14T21:59:06.0381711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0381790Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0382057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0382145Z layer_outputs = layer_module( 2025-08-14T21:59:06.0382381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0382476Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0382736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0382833Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0383118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0383243Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0383513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:06.0383605Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:06.0383609Z 2025-08-14T21:59:06.0383721Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0383945Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0384017Z return mod(**inputs) 2025-08-14T21:59:06.0384284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0384371Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0384638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0384725Z layer_outputs = layer_module( 2025-08-14T21:59:06.0384972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0385057Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0385336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0385428Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0385691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0385816Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0386081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0386173Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0386201Z 2025-08-14T21:59:06.0386315Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0386531Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0386610Z return mod(**inputs) 2025-08-14T21:59:06.0386894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0387006Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0387264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0387341Z layer_outputs = layer_module( 2025-08-14T21:59:06.0387590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0387674Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0387928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0388025Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0388286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0388382Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0388641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:06.0388726Z key_states = self.k(current_states) 2025-08-14T21:59:06.0388730Z 2025-08-14T21:59:06.0388852Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0389072Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0389151Z return mod(**inputs) 2025-08-14T21:59:06.0389427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0389508Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0389783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0389860Z layer_outputs = layer_module( 2025-08-14T21:59:06.0390099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0390196Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0390450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0390543Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0390809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0390897Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0391154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:06.0391300Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:06.0391303Z 2025-08-14T21:59:06.0391415Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0391644Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0391718Z return mod(**inputs) 2025-08-14T21:59:06.0391985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0392063Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0392430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0392517Z layer_outputs = layer_module( 2025-08-14T21:59:06.0392753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0392869Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0393121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0393208Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0393487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0393606Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0393856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:06.0393950Z value_states = self.v(current_states) 2025-08-14T21:59:06.0393956Z 2025-08-14T21:59:06.0394067Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0394292Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0394366Z return mod(**inputs) 2025-08-14T21:59:06.0394623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0394711Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0394967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0395048Z layer_outputs = layer_module( 2025-08-14T21:59:06.0395295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0395381Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0395644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0395731Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0395986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0396088Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0396343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:06.0396468Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:06.0396473Z 2025-08-14T21:59:06.0396586Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0396805Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0396885Z return mod(**inputs) 2025-08-14T21:59:06.0397144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0397227Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0397494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0397569Z layer_outputs = layer_module( 2025-08-14T21:59:06.0397810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0397893Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0398148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0398249Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0398499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0398588Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0398882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:06.0398997Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:06.0399001Z 2025-08-14T21:59:06.0399117Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0399347Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0399416Z return mod(**inputs) 2025-08-14T21:59:06.0399672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0399748Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0400039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0400115Z layer_outputs = layer_module( 2025-08-14T21:59:06.0400349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0400441Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0400688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0400769Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0401011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0401091Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0401345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0401430Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0401434Z 2025-08-14T21:59:06.0401542Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0401760Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0401838Z return mod(**inputs) 2025-08-14T21:59:06.0402084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0402157Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0402391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0402471Z layer_outputs = layer_module( 2025-08-14T21:59:06.0402689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0402768Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0403015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0403101Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0403354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0403444Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0403688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0403779Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0403784Z 2025-08-14T21:59:06.0403892Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0404104Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0404183Z return mod(**inputs) 2025-08-14T21:59:06.0404435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0404522Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0404769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0404843Z layer_outputs = layer_module( 2025-08-14T21:59:06.0405109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0405192Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0405441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0405543Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0405786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0405881Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0406158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:06.0406241Z key_states = self.k(current_states) 2025-08-14T21:59:06.0406244Z 2025-08-14T21:59:06.0406362Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0406576Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0406653Z return mod(**inputs) 2025-08-14T21:59:06.0406915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0406995Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0407258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0407333Z layer_outputs = layer_module( 2025-08-14T21:59:06.0407571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0407662Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0407917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0408010Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0408258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0408347Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0408610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:06.0408750Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:06.0408754Z 2025-08-14T21:59:06.0408870Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0409082Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0409156Z return mod(**inputs) 2025-08-14T21:59:06.0409419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0409495Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0409749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0409831Z layer_outputs = layer_module( 2025-08-14T21:59:06.0410068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0410161Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0410412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0410497Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0410754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0410847Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0411093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:06.0411184Z value_states = self.v(current_states) 2025-08-14T21:59:06.0411210Z 2025-08-14T21:59:06.0411319Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0411541Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0411631Z return mod(**inputs) 2025-08-14T21:59:06.0411880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0411965Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0412213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0412332Z layer_outputs = layer_module( 2025-08-14T21:59:06.0412567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0412649Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0412902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0412985Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0413230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0413329Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0413575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:06.0413697Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:06.0413701Z 2025-08-14T21:59:06.0413812Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0414021Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0414097Z return mod(**inputs) 2025-08-14T21:59:06.0414346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0414440Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0414685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0414763Z layer_outputs = layer_module( 2025-08-14T21:59:06.0414999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0415080Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0415320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0415414Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0415656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0415750Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0415991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:06.0416104Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:06.0416108Z 2025-08-14T21:59:06.0416223Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0416436Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0416504Z return mod(**inputs) 2025-08-14T21:59:06.0416759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0416837Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0417093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0417169Z layer_outputs = layer_module( 2025-08-14T21:59:06.0417398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0417506Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0417750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0417861Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0418105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0418192Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0418444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0418573Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0418578Z 2025-08-14T21:59:06.0418688Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0418908Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0418979Z return mod(**inputs) 2025-08-14T21:59:06.0419242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0419322Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0419665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0419762Z layer_outputs = layer_module( 2025-08-14T21:59:06.0420002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0420088Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0420356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0420457Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0420721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0420850Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0421107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:06.0421206Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:06.0421210Z 2025-08-14T21:59:06.0421323Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0421550Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0421620Z return mod(**inputs) 2025-08-14T21:59:06.0421879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0421968Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0422228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0422308Z layer_outputs = layer_module( 2025-08-14T21:59:06.0422560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0422644Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0422911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0423016Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0423269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0423405Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0423660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:06.0423758Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:06.0423762Z 2025-08-14T21:59:06.0423900Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
    decoder_outputs = self.decoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward
    hidden_states = self.layer[-1](hidden_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward
    forwarded_states = self.DenseReluDense(forwarded_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward
    hidden_states = self.wo(hidden_states)

The same "cudagraph partition due to non gpu ops. Found from :" diagnostic repeats with an identical call path (huggingface.py line 532, then modeling_t5.py lines 1762 and 1092, then modeling_layers.py line 94) for every op of every T5 decoder block. The distinct partition points reported in transformers/models/t5/modeling_t5.py are:

  self-attention block, line 681: self_attention_outputs = self.layer[0](
    then line 599: attention_output = self.SelfAttention(
      line 490: query_states = self.q(hidden_states)
      line 510: key_states = self.k(current_states)
      line 511: value_states = self.v(current_states)
      line 526: scores = torch.matmul(query_states, key_states.transpose(3, 2))
      line 565: attn_output = torch.matmul(attn_weights, value_states)
      line 567: attn_output = attn_output.transpose(1, 2).contiguous()
      line 569: attn_output = self.o(attn_output)
    then line 609: hidden_states = hidden_states + self.dropout(attention_output[0])
  cross-attention block, line 705: cross_attention_outputs = self.layer[1](
    then line 635: attention_output = self.EncDecAttention(
      same statements as above at lines 490, 510, 511, 526, 565, 567 and 569
    then line 647: layer_output = hidden_states + self.dropout(attention_output[0])
  feed-forward block, line 731: hidden_states = self.layer[-1](hidden_states)
    then line 342: forwarded_states = self.DenseReluDense(forwarded_states)
      line 287: hidden_states = self.wi(hidden_states)
      line 288: hidden_states = self.act(hidden_states)
      line 296: hidden_states = self.wo(hidden_states)

cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:59:06.0613002Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0613346Z return mod(**inputs) 2025-08-14T21:59:06.0613705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0614136Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0614525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0614914Z layer_outputs = layer_module( 2025-08-14T21:59:06.0615343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0615726Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0616117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0616519Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0616906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0617315Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0617726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:06.0618178Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:06.0618388Z 2025-08-14T21:59:06.0618502Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0618901Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0619256Z return mod(**inputs) 2025-08-14T21:59:06.0619711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0620132Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0620540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0620939Z layer_outputs = layer_module( 2025-08-14T21:59:06.0621321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0621719Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0622127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0622525Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0622940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0623356Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0623774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:06.0624169Z value_states = self.v(current_states) 2025-08-14T21:59:06.0624321Z 2025-08-14T21:59:06.0624430Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0624813Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0625150Z return mod(**inputs) 2025-08-14T21:59:06.0625519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0625908Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0626303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0626701Z layer_outputs = layer_module( 2025-08-14T21:59:06.0627081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0627476Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0627899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0628320Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0628716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0629135Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0629532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:06.0629980Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:06.0630199Z 2025-08-14T21:59:06.0630320Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0630703Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0631039Z return mod(**inputs) 2025-08-14T21:59:06.0631405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0631799Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0632178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0632570Z layer_outputs = layer_module( 2025-08-14T21:59:06.0632932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0633310Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0633691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0634066Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0634436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0634866Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0635240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:06.0635645Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:06.0635807Z 2025-08-14T21:59:06.0635917Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0636267Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0636592Z return mod(**inputs) 2025-08-14T21:59:06.0636936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0637310Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0637668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0638035Z layer_outputs = layer_module( 2025-08-14T21:59:06.0638382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0638735Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0639110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0639488Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0639887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0640277Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0640687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0641084Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0641222Z 2025-08-14T21:59:06.0641332Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0641737Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0642251Z return mod(**inputs) 2025-08-14T21:59:06.0642636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0643094Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0643501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0643894Z layer_outputs = layer_module( 2025-08-14T21:59:06.0644261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0644710Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0645109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0645530Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0645929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0646359Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0646768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:06.0647145Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:06.0647281Z 2025-08-14T21:59:06.0647384Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0647745Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0648070Z return mod(**inputs) 2025-08-14T21:59:06.0648414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0648790Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0649155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0649522Z layer_outputs = layer_module( 2025-08-14T21:59:06.0649859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0650220Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0650587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0650987Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0651399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0651807Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0652210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:06.0652587Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:06.0652741Z 2025-08-14T21:59:06.0652850Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0653228Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0653571Z return mod(**inputs) 2025-08-14T21:59:06.0653932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0654321Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0654706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0655091Z layer_outputs = layer_module( 2025-08-14T21:59:06.0655471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0655850Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0656277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0656681Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0657092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0657542Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0657963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:06.0658360Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:06.0658511Z 2025-08-14T21:59:06.0658640Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0659040Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0659384Z return mod(**inputs) 2025-08-14T21:59:06.0659825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0660243Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0660645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0661044Z layer_outputs = layer_module( 2025-08-14T21:59:06.0661432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0661818Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0662209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0662628Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0663041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 343, in forward 2025-08-14T21:59:06.0663502Z hidden_states = hidden_states + self.dropout(forwarded_states) 2025-08-14T21:59:06.0663701Z 2025-08-14T21:59:06.0663816Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0664209Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0664551Z return mod(**inputs) 2025-08-14T21:59:06.0664915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0665317Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0665578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0665655Z layer_outputs = layer_module( 2025-08-14T21:59:06.0665892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0665985Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0666237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0666324Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0666579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0666669Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0666923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0667005Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0667009Z 2025-08-14T21:59:06.0667119Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0667345Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0667415Z return mod(**inputs) 2025-08-14T21:59:06.0667671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0667749Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0668032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0668117Z layer_outputs = layer_module( 2025-08-14T21:59:06.0668348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0668452Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0668709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0668796Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0669082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0669171Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0669417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:06.0669511Z key_states = self.k(current_states) 2025-08-14T21:59:06.0669515Z 2025-08-14T21:59:06.0669626Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0669834Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0669911Z return mod(**inputs) 2025-08-14T21:59:06.0670160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0670245Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0670493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0670571Z layer_outputs = layer_module( 2025-08-14T21:59:06.0670817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0670901Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0671159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0671243Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0671489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0671584Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0671826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:06.0671963Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:06.0671969Z 2025-08-14T21:59:06.0672088Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0672294Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0672373Z return mod(**inputs) 2025-08-14T21:59:06.0672623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0672699Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0672957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0673034Z layer_outputs = layer_module( 2025-08-14T21:59:06.0673279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0673366Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0673602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0673691Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0673924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0674023Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0674264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:06.0674343Z value_states = self.v(current_states) 2025-08-14T21:59:06.0674364Z 2025-08-14T21:59:06.0674474Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0674671Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0674736Z return mod(**inputs) 2025-08-14T21:59:06.0674979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0675092Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0675329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0675409Z layer_outputs = layer_module( 2025-08-14T21:59:06.0675632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0675720Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0675954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0676038Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0676280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0676361Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0676593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:06.0676714Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:06.0676718Z 2025-08-14T21:59:06.0676832Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0677046Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0677113Z return mod(**inputs) 2025-08-14T21:59:06.0677357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0677439Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0677678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0677758Z layer_outputs = layer_module( 2025-08-14T21:59:06.0677979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0678061Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0678304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0678384Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0678622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0678711Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0678947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:06.0679067Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:06.0679070Z 2025-08-14T21:59:06.0679172Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0679369Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0679446Z return mod(**inputs) 2025-08-14T21:59:06.0679687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0679768Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0680024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0680096Z layer_outputs = layer_module( 2025-08-14T21:59:06.0680319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0680417Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0680653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:06.0680742Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:06.0680977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:06.0682057Z attention_output = self.SelfAttention( 2025-08-14T21:59:06.0682301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0682377Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0682381Z 2025-08-14T21:59:06.0682497Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0682693Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0682759Z return mod(**inputs) 2025-08-14T21:59:06.0683002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0683076Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0683315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0683385Z layer_outputs = layer_module( 2025-08-14T21:59:06.0683607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0683695Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0683923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0684012Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0684240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0684324Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0684557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:06.0684634Z query_states = self.q(hidden_states) 2025-08-14T21:59:06.0684638Z 2025-08-14T21:59:06.0684738Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0684942Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0700335Z return mod(**inputs) 2025-08-14T21:59:06.0700751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0700846Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0701142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0701225Z layer_outputs = layer_module( 2025-08-14T21:59:06.0701482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0701583Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0701857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0701956Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0702233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0702334Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0702604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:06.0702796Z key_states = self.k(current_states) 2025-08-14T21:59:06.0702805Z 2025-08-14T21:59:06.0702940Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0703168Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0703284Z return mod(**inputs) 2025-08-14T21:59:06.0703552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0703634Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0703941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0704052Z layer_outputs = layer_module( 2025-08-14T21:59:06.0704297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0704395Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0704676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0704762Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0705020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0705117Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0705386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:06.0705533Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:06.0705541Z 2025-08-14T21:59:06.0705661Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0705898Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0705971Z return mod(**inputs) 2025-08-14T21:59:06.0706251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0706346Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0706597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0706688Z layer_outputs = layer_module( 2025-08-14T21:59:06.0706927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0707011Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0707279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0707366Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0707619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0707710Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0707957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:06.0708051Z value_states = self.v(current_states) 2025-08-14T21:59:06.0708058Z 2025-08-14T21:59:06.0708170Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0708394Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0708464Z return mod(**inputs) 2025-08-14T21:59:06.0708713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0708808Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0709057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0709133Z layer_outputs = layer_module( 2025-08-14T21:59:06.0709393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0709478Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0709733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0709835Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0710077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0710173Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0710471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:06.0710614Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:06.0710627Z 2025-08-14T21:59:06.0710736Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0710953Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0711030Z return mod(**inputs) 2025-08-14T21:59:06.0711285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0711365Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0711628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0711704Z layer_outputs = layer_module( 2025-08-14T21:59:06.0711947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0712032Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0712281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0712374Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0712623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0712711Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0712966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:06.0713087Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:06.0713091Z 2025-08-14T21:59:06.0713207Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0713419Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0713491Z return mod(**inputs) 2025-08-14T21:59:06.0713756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0713835Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0714086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0714168Z layer_outputs = layer_module( 2025-08-14T21:59:06.0714403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0714496Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0714741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:06.0714824Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:06.0715079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:06.0715171Z attention_output = self.EncDecAttention( 2025-08-14T21:59:06.0715426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:06.0715508Z attn_output = self.o(attn_output) 2025-08-14T21:59:06.0715511Z 2025-08-14T21:59:06.0715635Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0715855Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0715934Z return mod(**inputs) 2025-08-14T21:59:06.0716180Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0716260Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0716486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0716579Z layer_outputs = layer_module( 2025-08-14T21:59:06.0716805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0716882Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0717114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0717204Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0717427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0717551Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0717773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:06.0717858Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:06.0717861Z 2025-08-14T21:59:06.0717962Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:06.0718159Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0718231Z return mod(**inputs) 2025-08-14T21:59:06.0718457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0718538Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0718764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0718836Z layer_outputs = layer_module( 2025-08-14T21:59:06.0719057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0719133Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0719357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0719459Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0719685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0719807Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0720033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:06.0720114Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:06.0720118Z 2025-08-14T21:59:06.0720229Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:06.0720424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:06.0720496Z return mod(**inputs) 2025-08-14T21:59:06.0720724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:06.0720795Z decoder_outputs = self.decoder( 2025-08-14T21:59:06.0721034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:06.0721104Z layer_outputs = layer_module( 2025-08-14T21:59:06.0721318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:06.0721420Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:06.0721647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:06.0721742Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:06.0721990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:06.0722102Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:06.0722344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:06.0722471Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:06.0722475Z 2025-08-14T21:59:06.0722587Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:59:06.0722786Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:06.0722852Z     return mod(**inputs)
2025-08-14T21:59:06.0723098Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1791, in forward
2025-08-14T21:59:06.0723183Z     lm_logits = self.lm_head(sequence_output)
2025-08-14T21:59:06.0723188Z 
2025-08-14T21:59:06.0723289Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:59:06.0723493Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:06.0723558Z     return mod(**inputs)
2025-08-14T21:59:06.0723800Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1798, in forward
2025-08-14T21:59:06.0723946Z     loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
2025-08-14T21:59:06.0723949Z 
2025-08-14T21:59:16.7082704Z Compilation time (from dynamo_timed): 21.546190183
2025-08-14T21:59:16.7232397Z pass
2025-08-14T21:59:16.7232966Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:16.7233840Z TIMING: _recursive_pre_grad_passes:0.0643 _recursive_joint_graph_passes:0.59088 _recursive_post_grad_passes:0.19817 async_compile.wait:0.86251 code_gen:10.0149 inductor_compile:12.51174 backend_compile:18.49101 gc:0.00063 entire_frame_compile:21.54619 total_wall_time:21.54619
2025-08-14T21:59:16.7234944Z STATS: call_* op count: 810 | FakeTensorMode.__torch_dispatch__:34635 | FakeTensor.__torch_dispatch__:5221 | ProxyTorchDispatchMode.__torch_dispatch__:8556
2025-08-14T21:59:16.7235483Z Dynamo produced 1 graphs covering 810 ops with 0 graph breaks (0 unique)
2025-08-14T21:59:22.7807109Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:59:22.7808115Z   from pkg_resources import resource_filename
2025-08-14T21:59:23.4602070Z 
2025-08-14T21:59:24.7274024Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:59:24.7279125Z loading model: 0it [00:01, ?it/s]
2025-08-14T21:59:24.7289800Z cpu eval T5Small
2025-08-14T21:59:26.1701636Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:26.5706416Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:27.0359399Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:39.4583916Z cudagraph partition due to non gpu ops.
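Note on the summary entries just above: "Compilation time (from dynamo_timed): 21.546190183" matches the entire_frame_compile and total_wall_time figures in the TIMING breakdown, and the bare "pass" line appears to be the harness's result status for this compiled T5Small run. The repeated "WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]" lines are emitted because the benchmark's cache-clearing helper only knows how to clear cuda and xpu caches. The sketch below is a hypothetical guard that would make this a silent no-op on CPU; the name maybe_empty_cache is invented for illustration and is not the benchmark's actual helper.

```python
import torch

# Hypothetical guard, not the benchmark's implementation: clear the allocator
# cache only on devices that actually have one, instead of warning on CPU.
def maybe_empty_cache(device: str) -> None:
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    # device == "cpu": nothing to clear, so do nothing quietly.

maybe_empty_cache("cpu")  # no-op on a CPU-only run like this one
```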
Found from : 2025-08-14T21:59:39.4584511Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.4584892Z return mod(**inputs) 2025-08-14T21:59:39.4585737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.4586387Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.4586820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.4587462Z layer_outputs = layer_module( 2025-08-14T21:59:39.4587863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.4588380Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.4588951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.4589541Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.4590035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.4590543Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.4591011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward 2025-08-14T21:59:39.4591487Z position_bias = position_bias + causal_mask 2025-08-14T21:59:39.4591653Z 2025-08-14T21:59:39.4591815Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.4592310Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.4592711Z return mod(**inputs) 2025-08-14T21:59:39.4593152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.4593658Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.4594078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.4594630Z layer_outputs = layer_module( 2025-08-14T21:59:39.4595023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.4595521Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.4596008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.4596501Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.4596934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.4597456Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.4597881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:39.4598364Z query_states = self.q(hidden_states) 2025-08-14T21:59:39.4598521Z 2025-08-14T21:59:39.4598642Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:59:39.4599138Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:39.4599538Z     return mod(**inputs)
2025-08-14T21:59:39.4599968Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-08-14T21:59:39.4600430Z     decoder_outputs = self.decoder(
2025-08-14T21:59:39.4600861Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-08-14T21:59:39.4601365Z     layer_outputs = layer_module(
2025-08-14T21:59:39.4601815Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:59:39.4602271Z     return super().__call__(*args, **kwargs)
2025-08-14T21:59:39.4602706Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-08-14T21:59:39.4603197Z     self_attention_outputs = self.layer[0](
2025-08-14T21:59:39.4603675Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-08-14T21:59:39.4604266Z     attention_output = self.SelfAttention(
2025-08-14T21:59:39.4604672Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward
2025-08-14T21:59:39.4605224Z     key_states = self.k(current_states)
2025-08-14T21:59:39.4605381Z 
[The identical "cudagraph partition due to non gpu ops. Found from :" diagnostic repeats between 2025-08-14T21:59:39.4605497Z and 2025-08-14T21:59:39.4880766Z. Every repeat passes through huggingface.py line 532 (forward_pass) and the layer_module / modeling_layers.py __call__ frames shown above; only the T5 sub-module path and the innermost frame in modeling_t5.py differ, covering the following call sites:]
  decoder self-attention, via self.decoder (line 1762) -> self.layer[0] (line 681) -> self.SelfAttention (line 599):
    line 510: key_states = self.k(current_states)
    line 511: value_states = self.v(current_states)
    line 526: scores = torch.matmul(query_states, key_states.transpose(3, 2))
    line 565: attn_output = torch.matmul(attn_weights, value_states)
    line 567: attn_output = attn_output.transpose(1, 2).contiguous()
    line 569: attn_output = self.o(attn_output)
  decoder cross-attention, via self.decoder (line 1762) -> self.layer[1] (line 705) -> self.EncDecAttention (line 635):
    line 490: query_states = self.q(hidden_states)
  encoder self-attention, via self.encoder (line 1725) -> self.layer[0] (line 681) -> self.SelfAttention (line 599):
    line 490: query_states = self.q(hidden_states)
    line 510: key_states = self.k(current_states)
    line 511: value_states = self.v(current_states)
    line 526: scores = torch.matmul(query_states, key_states.transpose(3, 2))
    line 565: attn_output = torch.matmul(attn_weights, value_states)
    line 567: attn_output = attn_output.transpose(1, 2).contiguous()
    line 569: attn_output = self.o(attn_output)
  encoder self-attention residual/dropout, via self.encoder (line 1725) -> self.layer[0] (line 681):
    line 609: hidden_states = hidden_states + self.dropout(attention_output[0])
  encoder feed-forward, via self.encoder (line 1725) -> self.layer[-1] (line 731) -> self.DenseReluDense (line 342):
    line 287: hidden_states = self.wi(hidden_states)
    line 288: hidden_states = self.act(hidden_states)
    line 296: hidden_states = self.wo(hidden_states)
2025-08-14T21:59:39.4880882Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:59:39.4881276Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.4881620Z return mod(**inputs) 2025-08-14T21:59:39.4882001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:39.4882405Z encoder_outputs = self.encoder( 2025-08-14T21:59:39.4882793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.4883195Z layer_outputs = layer_module( 2025-08-14T21:59:39.4883571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.4883972Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.4884393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.4884818Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.4885239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.4885714Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.4886149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:39.4886564Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:39.4886713Z 2025-08-14T21:59:39.4886854Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.4887283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.4887681Z return mod(**inputs) 2025-08-14T21:59:39.4888181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:39.4888758Z encoder_outputs = self.encoder( 2025-08-14T21:59:39.4889153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.4889561Z layer_outputs = layer_module( 2025-08-14T21:59:39.4889951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.4890342Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.4890753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.4891181Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.4891611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.4892062Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.4892524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:39.4892945Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:39.4893093Z 2025-08-14T21:59:39.4893213Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.4893611Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.4893965Z return mod(**inputs) 2025-08-14T21:59:39.4894358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:39.4894769Z encoder_outputs = self.encoder( 2025-08-14T21:59:39.4895193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.4895609Z layer_outputs = layer_module( 2025-08-14T21:59:39.4896001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.4896404Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.4896816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.4897245Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.4897654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.4898082Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.4898507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:39.4898986Z query_states = self.q(hidden_states) 2025-08-14T21:59:39.4899178Z 2025-08-14T21:59:39.4899337Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.4899979Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.4900344Z return mod(**inputs) 2025-08-14T21:59:39.4900775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:39.4901208Z encoder_outputs = self.encoder( 2025-08-14T21:59:39.4901608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.4902042Z layer_outputs = layer_module( 2025-08-14T21:59:39.4902430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.4902955Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.4903413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.4903828Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.4904224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.4904635Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.4905038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:39.4905443Z key_states = self.k(current_states) 2025-08-14T21:59:39.4905599Z 2025-08-14T21:59:39.4905711Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5007389Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5007471Z return mod(**inputs) 2025-08-14T21:59:39.5007730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-08-14T21:59:39.5007818Z encoder_outputs = self.encoder( 2025-08-14T21:59:39.5008081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5008160Z layer_outputs = layer_module( 2025-08-14T21:59:39.5008411Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5008499Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5008759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5008865Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5009126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5009260Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5009517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:39.5009605Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:39.5009609Z 2025-08-14T21:59:39.5009731Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5009948Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5010021Z return mod(**inputs) 2025-08-14T21:59:39.5010292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5010373Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5010644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5010724Z layer_outputs = layer_module( 2025-08-14T21:59:39.5010969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5011060Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5011349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5011446Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5011700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5011811Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5012068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:39.5012152Z key_states = self.k(current_states) 2025-08-14T21:59:39.5012182Z 2025-08-14T21:59:39.5012314Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5012541Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5012613Z return mod(**inputs) 2025-08-14T21:59:39.5012878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5012958Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5013215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5013305Z layer_outputs = layer_module( 2025-08-14T21:59:39.5013544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5013628Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5013887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5013978Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5014238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5014330Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5014592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:39.5014740Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:39.5014744Z 2025-08-14T21:59:39.5014858Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5015082Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5015153Z return mod(**inputs) 2025-08-14T21:59:39.5015424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5015516Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5015791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5015869Z layer_outputs = layer_module( 2025-08-14T21:59:39.5016119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5016203Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5016475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5016564Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5016823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5016919Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5017181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:39.5017268Z value_states = self.v(current_states) 2025-08-14T21:59:39.5017280Z 2025-08-14T21:59:39.5017392Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5017607Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5017709Z return mod(**inputs) 2025-08-14T21:59:39.5017985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5018064Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5018360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5018438Z layer_outputs = layer_module( 2025-08-14T21:59:39.5018682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5018802Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5019062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5019158Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5019416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5019582Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5019874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:39.5019999Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:39.5020004Z 2025-08-14T21:59:39.5020125Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5020343Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5020415Z return mod(**inputs) 2025-08-14T21:59:39.5020683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5020763Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5021037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5021118Z layer_outputs = layer_module( 2025-08-14T21:59:39.5021358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5021452Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5021707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5021794Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5022066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5022159Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5022421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:39.5022541Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:39.5022545Z 2025-08-14T21:59:39.5022659Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5022882Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5022953Z return mod(**inputs) 2025-08-14T21:59:39.5023307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5023427Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5023865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5023982Z layer_outputs = layer_module( 2025-08-14T21:59:39.5024232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5024317Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5024702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5024890Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5025278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5025372Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5025654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:39.5025747Z attn_output = self.o(attn_output) 2025-08-14T21:59:39.5025751Z 2025-08-14T21:59:39.5025865Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5026129Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5026213Z return mod(**inputs) 2025-08-14T21:59:39.5026476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5026566Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5026822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5026901Z layer_outputs = layer_module( 2025-08-14T21:59:39.5027152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5027237Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5027489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5027596Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5027850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5027983Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5028237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:39.5028323Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:39.5028327Z 2025-08-14T21:59:39.5028449Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5028663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5028744Z return mod(**inputs) 2025-08-14T21:59:39.5029001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5029079Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5029343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5029422Z layer_outputs = layer_module( 2025-08-14T21:59:39.5029748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5029867Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5030240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5030349Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5030604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5030729Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5030991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:39.5031082Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:39.5031089Z 2025-08-14T21:59:39.5031207Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5031425Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5031522Z return mod(**inputs) 2025-08-14T21:59:39.5031846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5031930Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5032188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5032300Z layer_outputs = layer_module( 2025-08-14T21:59:39.5032540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5032635Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5032928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5033029Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5033289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5033413Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5033662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:39.5033756Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:39.5033762Z 2025-08-14T21:59:39.5033873Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T21:59:39.5034095Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:39.5034166Z return mod(**inputs)
2025-08-14T21:59:39.5034423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-08-14T21:59:39.5034512Z decoder_outputs = self.decoder(
2025-08-14T21:59:39.5034766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-08-14T21:59:39.5034852Z layer_outputs = layer_module(
2025-08-14T21:59:39.5035101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:59:39.5035188Z return super().__call__(*args, **kwargs)
2025-08-14T21:59:39.5035447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-08-14T21:59:39.5035536Z self_attention_outputs = self.layer[0](
2025-08-14T21:59:39.5035786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-08-14T21:59:39.5035882Z attention_output = self.SelfAttention(
2025-08-14T21:59:39.5036135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward
2025-08-14T21:59:39.5036227Z query_states = self.q(hidden_states)
2025-08-14T21:59:39.5036231Z 
2025-08-14T21:59:39.5036342Z cudagraph partition due to non gpu ops.
[The same warning, with an identical call stack through huggingface.py line 532 (forward_pass), modeling_t5.py lines 1762 and 1092, and modeling_layers.py line 94, is repeated in the subsequent log entries for every op in the decoder's self-attention, cross-attention and feed-forward sublayers; only the innermost frames change. The partition points reported in modeling_t5.py are line 490 (query_states = self.q(hidden_states)), line 510 (key_states = self.k(current_states)), line 511 (value_states = self.v(current_states)), line 526 (scores = torch.matmul(query_states, key_states.transpose(3, 2))), line 565 (attn_output = torch.matmul(attn_weights, value_states)), line 567 (attn_output = attn_output.transpose(1, 2).contiguous()) and line 569 (attn_output = self.o(attn_output)), reached both through self.SelfAttention (lines 681 and 599) and through self.EncDecAttention (lines 705 and 635), plus line 287 (hidden_states = self.wi(hidden_states)), line 288 (hidden_states = self.act(hidden_states)) and line 296 (hidden_states = self.wo(hidden_states)) in the feed-forward path (lines 731 and 342), and the residual dropout add shown below.]
2025-08-14T21:59:39.5095558Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:59:39.5095771Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:39.5095845Z return mod(**inputs)
2025-08-14T21:59:39.5096118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-08-14T21:59:39.5096197Z decoder_outputs = self.decoder(
2025-08-14T21:59:39.5096466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-08-14T21:59:39.5096547Z layer_outputs = layer_module(
2025-08-14T21:59:39.5096786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T21:59:39.5096878Z return super().__call__(*args, **kwargs)
2025-08-14T21:59:39.5097157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-08-14T21:59:39.5097246Z self_attention_outputs = self.layer[0](
2025-08-14T21:59:39.5097514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 609, in forward
2025-08-14T21:59:39.5097679Z hidden_states = hidden_states + self.dropout(attention_output[0])
2025-08-14T21:59:39.5097683Z 
2025-08-14T21:59:39.5097799Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T21:59:39.5148478Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5148550Z return mod(**inputs) 2025-08-14T21:59:39.5148826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5148906Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5149167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5149251Z layer_outputs = layer_module( 2025-08-14T21:59:39.5149493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5149576Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5149845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5149930Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5150195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5150290Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5150548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:39.5150699Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:39.5150703Z 2025-08-14T21:59:39.5150837Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5151053Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5151133Z return mod(**inputs) 2025-08-14T21:59:39.5151387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5151492Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5151748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5151826Z layer_outputs = layer_module( 2025-08-14T21:59:39.5152116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5152203Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5152467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5152556Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5152809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5152907Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5153160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:39.5153245Z value_states = self.v(current_states) 2025-08-14T21:59:39.5153249Z 2025-08-14T21:59:39.5153367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5153590Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5153672Z return mod(**inputs) 2025-08-14T21:59:39.5153929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5154007Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5154270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5154348Z layer_outputs = layer_module( 2025-08-14T21:59:39.5154586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5154680Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5154930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5155023Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5155277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5155366Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5155625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:39.5155742Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:39.5155746Z 2025-08-14T21:59:39.5155862Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5156074Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5156147Z return mod(**inputs) 2025-08-14T21:59:39.5156408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5156486Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5156743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5156828Z layer_outputs = layer_module( 2025-08-14T21:59:39.5157068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5157157Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5157428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5157515Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5157772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5157883Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5158143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:39.5158270Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:39.5158290Z 2025-08-14T21:59:39.5158419Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5158645Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5158719Z return mod(**inputs) 2025-08-14T21:59:39.5158989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5159076Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5159348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5159436Z layer_outputs = layer_module( 2025-08-14T21:59:39.5159674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5159757Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5160020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5160108Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5160365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5160461Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5160724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:39.5160814Z attn_output = self.o(attn_output) 2025-08-14T21:59:39.5160818Z 2025-08-14T21:59:39.5160929Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5161141Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5161221Z return mod(**inputs) 2025-08-14T21:59:39.5161492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5161581Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5161852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5161928Z layer_outputs = layer_module( 2025-08-14T21:59:39.5162176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5162259Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5162516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5162612Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5162869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 647, in forward 2025-08-14T21:59:39.5163019Z layer_output = hidden_states + self.dropout(attention_output[0]) 2025-08-14T21:59:39.5163023Z 2025-08-14T21:59:39.5163136Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5163348Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5163428Z return mod(**inputs) 2025-08-14T21:59:39.5163702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5163782Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5164058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5164154Z layer_outputs = layer_module( 2025-08-14T21:59:39.5164400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5164484Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5164743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5164881Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5165149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5165281Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5165545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:39.5165630Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:39.5165634Z 2025-08-14T21:59:39.5165754Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5165978Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5166049Z return mod(**inputs) 2025-08-14T21:59:39.5166320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5166403Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5166679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5166756Z layer_outputs = layer_module( 2025-08-14T21:59:39.5166996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5167087Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5167346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5167445Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5167705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5167830Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5168089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:39.5168181Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:39.5168184Z 2025-08-14T21:59:39.5168296Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5168520Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5168595Z return mod(**inputs) 2025-08-14T21:59:39.5168858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5168938Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5169198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5169283Z layer_outputs = layer_module( 2025-08-14T21:59:39.5169520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5169605Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5169866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5169962Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5170244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5170372Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5170624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:39.5170739Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:39.5170743Z 2025-08-14T21:59:39.5170852Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5171075Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5171145Z return mod(**inputs) 2025-08-14T21:59:39.5171442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5171531Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5171786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5171863Z layer_outputs = layer_module( 2025-08-14T21:59:39.5172125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5172208Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5172469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5172556Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5172811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5172911Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5173163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:39.5173246Z query_states = self.q(hidden_states) 2025-08-14T21:59:39.5173258Z 2025-08-14T21:59:39.5173371Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5173584Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5173664Z return mod(**inputs) 2025-08-14T21:59:39.5173919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5174000Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5174262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5174339Z layer_outputs = layer_module( 2025-08-14T21:59:39.5174586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5174670Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5174926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5175022Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5175272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5175361Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5175625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:39.5175708Z key_states = self.k(current_states) 2025-08-14T21:59:39.5175712Z 2025-08-14T21:59:39.5175830Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5176043Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5176116Z return mod(**inputs) 2025-08-14T21:59:39.5176378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5176457Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5176726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5176812Z layer_outputs = layer_module( 2025-08-14T21:59:39.5177055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5177166Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5177417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5177502Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5177820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5177908Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5178163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:39.5178304Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:39.5178308Z 2025-08-14T21:59:39.5178418Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5178635Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5178707Z return mod(**inputs) 2025-08-14T21:59:39.5178958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5179044Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5179302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5179386Z layer_outputs = layer_module( 2025-08-14T21:59:39.5179727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5179818Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5180085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5180172Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5180430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5180521Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5180772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:39.5180866Z value_states = self.v(current_states) 2025-08-14T21:59:39.5180872Z 2025-08-14T21:59:39.5180986Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5181198Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5181277Z return mod(**inputs) 2025-08-14T21:59:39.5181533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5181620Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5181875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5181954Z layer_outputs = layer_module( 2025-08-14T21:59:39.5182198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5182282Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5182533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5182628Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5182880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5182975Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5183248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:39.5183370Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:39.5183375Z 2025-08-14T21:59:39.5183515Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5183733Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5183814Z return mod(**inputs) 2025-08-14T21:59:39.5184071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5184186Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5184451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5184529Z layer_outputs = layer_module( 2025-08-14T21:59:39.5184770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5184864Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5185115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5185212Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5185461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5185549Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5185809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:39.5185929Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:39.5185933Z 2025-08-14T21:59:39.5186050Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5186266Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5186337Z return mod(**inputs) 2025-08-14T21:59:39.5186599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5186677Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5186933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5187018Z layer_outputs = layer_module( 2025-08-14T21:59:39.5187255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5187351Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5187601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5187686Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5187947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5188035Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5188287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:39.5188379Z attn_output = self.o(attn_output) 2025-08-14T21:59:39.5188383Z 2025-08-14T21:59:39.5188494Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5188716Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5188787Z return mod(**inputs) 2025-08-14T21:59:39.5189047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5189134Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5189390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5189486Z layer_outputs = layer_module( 2025-08-14T21:59:39.5189737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5189823Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5190102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5190189Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5190439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5190575Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5190826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:39.5190922Z query_states = self.q(hidden_states) 2025-08-14T21:59:39.5190926Z 2025-08-14T21:59:39.5191039Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5191254Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5191332Z return mod(**inputs) 2025-08-14T21:59:39.5191587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5191666Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5191930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5192005Z layer_outputs = layer_module( 2025-08-14T21:59:39.5192253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5192336Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5192584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5192679Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5192932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5193021Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5193280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:39.5193363Z key_states = self.k(current_states) 2025-08-14T21:59:39.5193366Z 2025-08-14T21:59:39.5193484Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5193698Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5193772Z return mod(**inputs) 2025-08-14T21:59:39.5194035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5194119Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5194381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5194456Z layer_outputs = layer_module( 2025-08-14T21:59:39.5194692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5194786Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5195038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5195122Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5195384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5195473Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5195727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:39.5195887Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:39.5195891Z 2025-08-14T21:59:39.5196002Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5196225Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5196315Z return mod(**inputs) 2025-08-14T21:59:39.5196579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5196657Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5196929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5197035Z layer_outputs = layer_module( 2025-08-14T21:59:39.5197272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5197356Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5197616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5197701Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5197960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5198048Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5198300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:39.5198394Z value_states = self.v(current_states) 2025-08-14T21:59:39.5198400Z 2025-08-14T21:59:39.5198511Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5198724Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5198802Z return mod(**inputs) 2025-08-14T21:59:39.5199058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5199144Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5199399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5199479Z layer_outputs = layer_module( 2025-08-14T21:59:39.5199724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5199809Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5200076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5200165Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5200431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5200527Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5200787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:39.5200906Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:39.5200910Z 2025-08-14T21:59:39.5201033Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5201245Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5201323Z return mod(**inputs) 2025-08-14T21:59:39.5201586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5201668Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5201931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5202008Z layer_outputs = layer_module( 2025-08-14T21:59:39.5202266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5202360Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5202621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5202741Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5203001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5203089Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5203370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:39.5203505Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:39.5206722Z 2025-08-14T21:59:39.5206847Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5207083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5207157Z return mod(**inputs) 2025-08-14T21:59:39.5207439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5207525Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5207798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5207887Z layer_outputs = layer_module( 2025-08-14T21:59:39.5208126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5208220Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5208484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5208607Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5208882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5208973Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5209236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:39.5209329Z attn_output = self.o(attn_output) 2025-08-14T21:59:39.5209334Z 2025-08-14T21:59:39.5209446Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5209663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5209740Z return mod(**inputs) 2025-08-14T21:59:39.5210014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5210101Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5210357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5210437Z layer_outputs = layer_module( 2025-08-14T21:59:39.5210684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5210767Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5211019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5211126Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5211376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5211513Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5211766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:39.5211852Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:39.5211855Z 2025-08-14T21:59:39.5211999Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5212217Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5212296Z return mod(**inputs) 2025-08-14T21:59:39.5212572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5212653Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5212914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5212990Z layer_outputs = layer_module( 2025-08-14T21:59:39.5213245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5213337Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5213673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5213777Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5214026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5214152Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5214409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:39.5214496Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:39.5214499Z 2025-08-14T21:59:39.5214616Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5214831Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5214902Z return mod(**inputs) 2025-08-14T21:59:39.5215165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5215247Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5215499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5215582Z layer_outputs = layer_module( 2025-08-14T21:59:39.5215820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5215910Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5216159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5216254Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5216512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5216636Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5216889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:39.5216985Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:39.5216989Z 2025-08-14T21:59:39.5217100Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5217326Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5217398Z return mod(**inputs) 2025-08-14T21:59:39.5217652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5217738Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5217993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5218078Z layer_outputs = layer_module( 2025-08-14T21:59:39.5218318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5218425Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5218686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5218784Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5219066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 343, in forward 2025-08-14T21:59:39.5219213Z hidden_states = hidden_states + self.dropout(forwarded_states) 2025-08-14T21:59:39.5219217Z 2025-08-14T21:59:39.5219329Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5219671Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5219754Z return mod(**inputs) 2025-08-14T21:59:39.5220015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5220141Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5220400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5220487Z layer_outputs = layer_module( 2025-08-14T21:59:39.5220728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5220817Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5221079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5221168Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5221423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5221525Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5221781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:39.5221878Z query_states = self.q(hidden_states) 2025-08-14T21:59:39.5221883Z 2025-08-14T21:59:39.5221997Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5222213Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5222298Z return mod(**inputs) 2025-08-14T21:59:39.5222554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5222633Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5222898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5222976Z layer_outputs = layer_module( 2025-08-14T21:59:39.5223222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5223309Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5223563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5223661Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5223911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5224012Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5224267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:39.5224351Z key_states = self.k(current_states) 2025-08-14T21:59:39.5224355Z 2025-08-14T21:59:39.5224476Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5224693Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5224765Z return mod(**inputs) 2025-08-14T21:59:39.5225053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5225136Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5225400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5225498Z layer_outputs = layer_module( 2025-08-14T21:59:39.5225736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5225828Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5226097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5226185Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5226444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5226568Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5226829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:39.5226969Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:39.5226973Z 2025-08-14T21:59:39.5227087Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5227310Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5227380Z return mod(**inputs) 2025-08-14T21:59:39.5227642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5227722Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5227973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5228060Z layer_outputs = layer_module( 2025-08-14T21:59:39.5228296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5228379Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5228638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5228724Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5228982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5229068Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5229324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:39.5229416Z value_states = self.v(current_states) 2025-08-14T21:59:39.5229419Z 2025-08-14T21:59:39.5229530Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5229749Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5229820Z return mod(**inputs) 2025-08-14T21:59:39.5230073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5230160Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5230413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5230489Z layer_outputs = layer_module( 2025-08-14T21:59:39.5230731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5230818Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5231074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5231161Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5231432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5231529Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5231780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:39.5231916Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:39.5231927Z 2025-08-14T21:59:39.5232038Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5232253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5232334Z return mod(**inputs) 2025-08-14T21:59:39.5232611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5232693Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5232979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5233058Z layer_outputs = layer_module( 2025-08-14T21:59:39.5233305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5233391Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5233645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5233739Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5233990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5234078Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5234337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:39.5234460Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:39.5234463Z 2025-08-14T21:59:39.5234585Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5234800Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5234870Z return mod(**inputs) 2025-08-14T21:59:39.5235135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5235215Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5235468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5235552Z layer_outputs = layer_module( 2025-08-14T21:59:39.5235793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5235887Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5236139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-08-14T21:59:39.5236227Z self_attention_outputs = self.layer[0]( 2025-08-14T21:59:39.5236488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-08-14T21:59:39.5236576Z attention_output = self.SelfAttention( 2025-08-14T21:59:39.5236836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:39.5236919Z attn_output = self.o(attn_output) 2025-08-14T21:59:39.5236923Z 2025-08-14T21:59:39.5237033Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5237273Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5237343Z return mod(**inputs) 2025-08-14T21:59:39.5237596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5237687Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5237961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5238047Z layer_outputs = layer_module( 2025-08-14T21:59:39.5238295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5238396Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5238651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5238733Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5238990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5239091Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5239356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 490, in forward 2025-08-14T21:59:39.5239449Z query_states = self.q(hidden_states) 2025-08-14T21:59:39.5239454Z 2025-08-14T21:59:39.5239566Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5239786Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5239868Z return mod(**inputs) 2025-08-14T21:59:39.5240125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5240212Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5240467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5240545Z layer_outputs = layer_module( 2025-08-14T21:59:39.5240789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5240875Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5241128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5241222Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5241485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5241583Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5242192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 510, in forward 2025-08-14T21:59:39.5242328Z key_states = self.k(current_states) 2025-08-14T21:59:39.5242335Z 2025-08-14T21:59:39.5242522Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5242739Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5242820Z return mod(**inputs) 2025-08-14T21:59:39.5243089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5243166Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5243441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5243518Z layer_outputs = layer_module( 2025-08-14T21:59:39.5243756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5243848Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5244112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5244203Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5244465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5244554Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5244898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-08-14T21:59:39.5245054Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-08-14T21:59:39.5245126Z 2025-08-14T21:59:39.5245237Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5245456Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5245524Z return mod(**inputs) 2025-08-14T21:59:39.5245794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5245900Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5246164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5246277Z layer_outputs = layer_module( 2025-08-14T21:59:39.5246512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5246599Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5246855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5246940Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5247199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5247286Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5247541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 511, in forward 2025-08-14T21:59:39.5247634Z value_states = self.v(current_states) 2025-08-14T21:59:39.5247638Z 2025-08-14T21:59:39.5247747Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5247968Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5248038Z return mod(**inputs) 2025-08-14T21:59:39.5248304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5248392Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5248651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5248727Z layer_outputs = layer_module( 2025-08-14T21:59:39.5248970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5249052Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5249321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5249406Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5249660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5249754Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5250008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-08-14T21:59:39.5250131Z attn_output = torch.matmul(attn_weights, value_states) 2025-08-14T21:59:39.5250135Z 2025-08-14T21:59:39.5250246Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5250470Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5250551Z return mod(**inputs) 2025-08-14T21:59:39.5250823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5250904Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5251203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5251282Z layer_outputs = layer_module( 2025-08-14T21:59:39.5251526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5251610Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5251882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5251975Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5252227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5252348Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5252614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-08-14T21:59:39.5252751Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-08-14T21:59:39.5252756Z 2025-08-14T21:59:39.5252875Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5253089Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5253161Z return mod(**inputs) 2025-08-14T21:59:39.5253428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5253506Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5253768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5253844Z layer_outputs = layer_module( 2025-08-14T21:59:39.5254083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5254174Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5254427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-08-14T21:59:39.5254512Z cross_attention_outputs = self.layer[1]( 2025-08-14T21:59:39.5254770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-08-14T21:59:39.5254863Z attention_output = self.EncDecAttention( 2025-08-14T21:59:39.5255119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 569, in forward 2025-08-14T21:59:39.5255203Z attn_output = self.o(attn_output) 2025-08-14T21:59:39.5255207Z 2025-08-14T21:59:39.5255317Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5255540Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5255612Z return mod(**inputs) 2025-08-14T21:59:39.5255878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5255959Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5256211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5256296Z layer_outputs = layer_module( 2025-08-14T21:59:39.5256537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5256619Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5256878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5256976Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5257237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5257367Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5257636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 287, in forward 2025-08-14T21:59:39.5257734Z hidden_states = self.wi(hidden_states) 2025-08-14T21:59:39.5257737Z 2025-08-14T21:59:39.5257850Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T21:59:39.5258066Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5258165Z return mod(**inputs) 2025-08-14T21:59:39.5258424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5258512Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5258789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5258868Z layer_outputs = layer_module( 2025-08-14T21:59:39.5259133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5259219Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5259474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5259635Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5259895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5260028Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5260279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward 2025-08-14T21:59:39.5260370Z hidden_states = self.act(hidden_states) 2025-08-14T21:59:39.5260374Z 2025-08-14T21:59:39.5260493Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T21:59:39.5260707Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T21:59:39.5260787Z return mod(**inputs) 2025-08-14T21:59:39.5261042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-08-14T21:59:39.5261123Z decoder_outputs = self.decoder( 2025-08-14T21:59:39.5261386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-08-14T21:59:39.5261466Z layer_outputs = layer_module( 2025-08-14T21:59:39.5261705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T21:59:39.5261798Z return super().__call__(*args, **kwargs) 2025-08-14T21:59:39.5262050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 731, in forward 2025-08-14T21:59:39.5262155Z hidden_states = self.layer[-1](hidden_states) 2025-08-14T21:59:39.5262408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 342, in forward 2025-08-14T21:59:39.5262534Z forwarded_states = self.DenseReluDense(forwarded_states) 2025-08-14T21:59:39.5262797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 296, in forward 2025-08-14T21:59:39.5262883Z hidden_states = self.wo(hidden_states) 2025-08-14T21:59:39.5262889Z 2025-08-14T21:59:39.5263009Z cudagraph partition due to non gpu ops. 
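The repeated "cudagraph partition due to non gpu ops" messages and their "Found from" traces in this job come from inductor: it emits one message for each point where the graph it would otherwise capture with CUDA Graphs contains an op that does not run on the GPU, and the trace points back to the user-level line (here the T5 attention and feed-forward calls) that produced that op. Because this run evaluates the models on CPU ("cpu eval" below), every op qualifies. The following is a minimal, hedged sketch of a setup that exercises this path, not the benchmark harness itself; "reduce-overhead" is the standard torch.compile mode that enables CUDA Graphs, but whether this exact log wording appears depends on the PyTorch version and inductor configuration.

    # Hedged sketch (assumptions: standard torch.compile API; not the harness's code).
    # Compile a small module with the cudagraphs-oriented "reduce-overhead" mode and
    # feed it CPU tensors, so every op in the captured graph is a "non gpu op".
    import torch

    mod = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())  # stand-in for a decoder block
    compiled = torch.compile(mod, mode="reduce-overhead")

    x = torch.randn(8, 64)      # CPU tensor
    with torch.no_grad():
        out = compiled(x)       # compiles, then runs outside any CUDA graph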
Found from :
2025-08-14T21:59:39.5263223Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:39.5263295Z return mod(**inputs)
2025-08-14T21:59:39.5263560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1791, in forward
2025-08-14T21:59:39.5263654Z lm_logits = self.lm_head(sequence_output)
2025-08-14T21:59:39.5263658Z
2025-08-14T21:59:39.5263777Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T21:59:39.5264011Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T21:59:39.5264084Z return mod(**inputs)
2025-08-14T21:59:39.5264348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1798, in forward
2025-08-14T21:59:39.5264508Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
2025-08-14T21:59:39.5264533Z
2025-08-14T21:59:48.8378720Z Compilation time (from dynamo_timed): 20.107040066
2025-08-14T21:59:48.8525374Z pass
2025-08-14T21:59:48.8525928Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:48.8532953Z TIMING: _recursive_pre_grad_passes:0.06104 _recursive_joint_graph_passes:0.58713 _recursive_post_grad_passes:0.20108 async_compile.wait:0.00555 code_gen:8.69071 inductor_compile:11.20956 backend_compile:17.14415 gc:0.00099 entire_frame_compile:20.10704 total_wall_time:20.10704
2025-08-14T21:59:48.8534116Z STATS: call_* op count: 810 | FakeTensorMode.__torch_dispatch__:34635 | FakeTensor.__torch_dispatch__:5221 | ProxyTorchDispatchMode.__torch_dispatch__:8556
2025-08-14T21:59:48.8534669Z Dynamo produced 1 graphs covering 810 ops with 0 graph breaks (0 unique)
2025-08-14T21:59:54.8131724Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T21:59:54.8132901Z from pkg_resources import resource_filename
2025-08-14T21:59:55.5390343Z
2025-08-14T21:59:58.2757149Z loading model: 0it [00:00, ?it/s]
2025-08-14T21:59:58.2757515Z loading model: 0it [00:02, ?it/s]
2025-08-14T21:59:58.2777977Z cpu eval TrOCRForCausalLM
2025-08-14T21:59:58.4504011Z WARNING:common:fp64 golden ref were not generated for TrOCRForCausalLM. Setting accuracy check to cosine
2025-08-14T21:59:58.4804271Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:58.7392350Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T21:59:58.9957807Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:00:09.0587324Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0587732Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0587970Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0588263Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0588570Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0588922Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0589244Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0589499Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0589836Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0590150Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0590463Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0590695Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0590926Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0591154Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0591389Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0591618Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0591850Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0592096Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0592328Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0592549Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0592778Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0592997Z cudagraph partition due to non gpu ops
2025-08-14T22:00:09.0593265Z cudagraph partition due to non gpu ops.
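The TIMING line above breaks the 20.107 s "Compilation time (from dynamo_timed)" figure into phases. The values are consistent with a nested breakdown in which code_gen sits inside inductor_compile, inductor_compile inside backend_compile, and backend_compile inside entire_frame_compile (which equals total_wall_time); that nesting is inferred from the numbers, not stated by the log. A small sketch for pulling such a line apart when reading it:

    # Hedged helper for reading a TIMING line like the one above; assumes the
    # "name:seconds name:seconds ..." layout shown in this log.
    timing = ("_recursive_pre_grad_passes:0.06104 _recursive_joint_graph_passes:0.58713 "
              "_recursive_post_grad_passes:0.20108 async_compile.wait:0.00555 code_gen:8.69071 "
              "inductor_compile:11.20956 backend_compile:17.14415 gc:0.00099 "
              "entire_frame_compile:20.10704 total_wall_time:20.10704")

    phases = {name: float(sec) for name, sec in (field.rsplit(":", 1) for field in timing.split())}

    # Consistent with the assumed nesting:
    # code_gen <= inductor_compile <= backend_compile <= entire_frame_compile == total_wall_time
    assert phases["code_gen"] <= phases["inductor_compile"] <= phases["backend_compile"]
    assert phases["entire_frame_compile"] == phases["total_wall_time"]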
Found from : 2025-08-14T22:00:09.0593752Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0594653Z return mod(**inputs) 2025-08-14T22:00:09.0595113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0595558Z outputs = self.model.decoder( 2025-08-14T22:00:09.0596065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0596511Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0596900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0597303Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0597797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0598381Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0598839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0599242Z return self.act(input) 2025-08-14T22:00:09.0599377Z 2025-08-14T22:00:09.0599474Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0599720Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0599957Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0600183Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0600416Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0600649Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0600874Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0601111Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0601347Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0601580Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0601808Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0602079Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0602498Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0602871Z return mod(**inputs) 2025-08-14T22:00:09.0603288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0603730Z outputs = self.model.decoder( 2025-08-14T22:00:09.0604163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0604600Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0605004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0605422Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0605861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0606452Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0606913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0607278Z return self.act(input) 2025-08-14T22:00:09.0607398Z 2025-08-14T22:00:09.0607483Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0607754Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0607967Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0608189Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0608411Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0608623Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0608843Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0609063Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0609276Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0609498Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0609754Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0610124Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0610545Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0610910Z return mod(**inputs) 2025-08-14T22:00:09.0611310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0611754Z outputs = self.model.decoder( 2025-08-14T22:00:09.0612162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0612576Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0612980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0613417Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0614051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0614524Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0614944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0615341Z return self.act(input) 2025-08-14T22:00:09.0615513Z 2025-08-14T22:00:09.0615641Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0615962Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0616239Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0616472Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0616697Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0616917Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0617189Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0617443Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0617660Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0617888Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0618119Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0618367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0618763Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0619122Z return mod(**inputs) 2025-08-14T22:00:09.0619898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0620371Z outputs = self.model.decoder( 2025-08-14T22:00:09.0620792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0621222Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0621601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0622005Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0622420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0622890Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0623306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0623679Z return self.act(input) 2025-08-14T22:00:09.0623799Z 2025-08-14T22:00:09.0623894Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0624118Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0624345Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0624569Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0624795Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0625013Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0625236Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0625461Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0625677Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0625954Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0626182Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0626431Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0626825Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0627214Z return mod(**inputs) 2025-08-14T22:00:09.0627615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0628039Z outputs = self.model.decoder( 2025-08-14T22:00:09.0628491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0628919Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0629322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0629720Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0630146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0630614Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0631029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0631399Z return self.act(input) 2025-08-14T22:00:09.0631519Z 2025-08-14T22:00:09.0631615Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0631835Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0632062Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0632290Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0632516Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0632734Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0632962Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0633191Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0633410Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0633639Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0633866Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0634208Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0634617Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0634976Z return mod(**inputs) 2025-08-14T22:00:09.0635384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0635790Z outputs = self.model.decoder( 2025-08-14T22:00:09.0636197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0636611Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0636978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0637360Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0637772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0638232Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0638630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0638990Z return self.act(input) 2025-08-14T22:00:09.0639105Z 2025-08-14T22:00:09.0639195Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0639410Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0639629Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0639851Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0640077Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0640293Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0640558Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0640779Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0640992Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0641207Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0641448Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0641688Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0642378Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0642735Z return mod(**inputs) 2025-08-14T22:00:09.0643264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0643691Z outputs = self.model.decoder( 2025-08-14T22:00:09.0644156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0644640Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0645025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0645430Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0645848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0646308Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0646714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0647081Z return self.act(input) 2025-08-14T22:00:09.0647197Z 2025-08-14T22:00:09.0647291Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0647506Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0647727Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0647949Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0648170Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0648382Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0648606Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0648833Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0649051Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0649280Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0649513Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0649757Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0650147Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0650495Z return mod(**inputs) 2025-08-14T22:00:09.0650880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0651300Z outputs = self.model.decoder( 2025-08-14T22:00:09.0651723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0652150Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0652523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0652947Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0653366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0653837Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0654244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0654617Z return self.act(input) 2025-08-14T22:00:09.0654735Z 2025-08-14T22:00:09.0654825Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0655105Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0655347Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0655606Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0655831Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0656045Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0656263Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0656482Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0656740Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0656953Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0657172Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0657426Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0657808Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0658182Z return mod(**inputs) 2025-08-14T22:00:09.0658575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0659010Z outputs = self.model.decoder( 2025-08-14T22:00:09.0659413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0660006Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0660401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0660810Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0661232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0661696Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0662119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0662480Z return self.act(input) 2025-08-14T22:00:09.0662608Z 2025-08-14T22:00:09.0662692Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0662919Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0663135Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0663356Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0663577Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0663789Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0664011Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0664228Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0664446Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0664660Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0664870Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0665105Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0665487Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0665843Z return mod(**inputs) 2025-08-14T22:00:09.0666210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0666596Z outputs = self.model.decoder( 2025-08-14T22:00:09.0666983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0667376Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0667727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0668086Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0668482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0668921Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0669317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0669655Z return self.act(input) 2025-08-14T22:00:09.0669774Z 2025-08-14T22:00:09.0669854Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0670100Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0670305Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0670513Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0670721Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0670950Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0671238Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0671448Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0671652Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0671854Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0672062Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0672336Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:09.0672738Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0673103Z return mod(**inputs) 2025-08-14T22:00:09.0673472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0673860Z outputs = self.model.decoder( 2025-08-14T22:00:09.0674245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0674640Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0674989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0675345Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0675736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0676177Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0676569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0676903Z return self.act(input) 2025-08-14T22:00:09.0677022Z 2025-08-14T22:00:09.0677104Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0677319Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0677523Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0677728Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0677937Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0678136Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0678342Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0678549Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0678755Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0678950Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0679157Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0679393Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:09.0679750Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:09.0680076Z return mod(**inputs) 2025-08-14T22:00:09.0680446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 823, in forward 2025-08-14T22:00:09.0680832Z outputs = self.model.decoder( 2025-08-14T22:00:09.0681218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 644, in forward 2025-08-14T22:00:09.0681612Z layer_outputs = decoder_layer( 2025-08-14T22:00:09.0681958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:09.0682315Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:09.0682707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 401, in forward 2025-08-14T22:00:09.0683140Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:09.0683563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:09.0683897Z return self.act(input) 2025-08-14T22:00:09.0684014Z 2025-08-14T22:00:09.0684094Z cudagraph partition due to non gpu ops 2025-08-14T22:00:09.0684330Z cudagraph partition due to non gpu ops. 
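The traces in the preceding TrOCRForCausalLM entries all resolve to the same user line (modeling_trocr.py line 401, the fc1 activation via activations.py line 69), reported once per partition point. When skimming a raw log like this one, collapsing repeated "Found from" frames into counts makes that pattern obvious; below is a purely illustrative post-processing helper, not part of the CI tooling, and "job.log" is a placeholder path for a saved copy of this log.

    # Illustrative log post-processing: count how often each "Found from" frame
    # occurs so repeated partition points at the same source line collapse into a
    # single counted entry.
    import re
    from collections import Counter

    frame_re = re.compile(r'File "([^"]+)", line (\d+), in (\w+)')

    with open("job.log") as f:           # placeholder path for a saved copy of this log
        frames = Counter(frame_re.findall(f.read()))

    for (path, lineno, func), count in frames.most_common(10):
        print(f"{count:5d}  {path}:{lineno} ({func})")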
Found from :
2025-08-14T22:00:09.0684713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:09.0685039Z return mod(**inputs)
2025-08-14T22:00:09.0685407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 839, in forward
2025-08-14T22:00:09.0685819Z logits = self.output_projection(outputs[0])
2025-08-14T22:00:09.0685964Z
2025-08-14T22:00:09.0686089Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:00:09.0686464Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:09.0686814Z return mod(**inputs)
2025-08-14T22:00:09.0687183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 844, in forward
2025-08-14T22:00:09.0687650Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-08-14T22:00:09.0687864Z
2025-08-14T22:00:18.8060746Z Compilation time (from dynamo_timed): 18.367619841
2025-08-14T22:00:18.8089460Z pass
2025-08-14T22:00:18.8090107Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:00:18.8091292Z TIMING: _recursive_pre_grad_passes:0.04412 _recursive_joint_graph_passes:0.79338 _recursive_post_grad_passes:0.08034 async_compile.wait:0.88243 code_gen:9.59164 inductor_compile:11.31635 backend_compile:15.98421 gc:0.00196 entire_frame_compile:18.36762 total_wall_time:18.36762
2025-08-14T22:00:18.8092486Z STATS: call_* op count: 443 | FakeTensorMode.__torch_dispatch__:26118 | FakeTensor.__torch_dispatch__:3895 | ProxyTorchDispatchMode.__torch_dispatch__:6287
2025-08-14T22:00:18.8093142Z Dynamo produced 1 graphs covering 443 ops with 0 graph breaks (0 unique)
2025-08-14T22:00:24.6594956Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T22:00:24.6596212Z from pkg_resources import resource_filename
2025-08-14T22:00:25.2607391Z
2025-08-14T22:00:32.0396354Z loading model: 0it [00:00, ?it/s]
2025-08-14T22:00:32.0396903Z loading model: 0it [00:06, ?it/s]
2025-08-14T22:00:32.0425732Z cpu eval XGLMForCausalLM
2025-08-14T22:00:32.4373180Z WARNING:common:fp64 golden ref were not generated for XGLMForCausalLM. Setting accuracy check to cosine
2025-08-14T22:00:32.5403389Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:00:33.1220839Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:00:33.7126809Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:00:53.3958050Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.3958510Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.3958919Z cudagraph partition due to non gpu ops.
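For TrOCRForCausalLM above and XGLMForCausalLM here, the harness reports that no fp64 golden reference could be generated and falls back to a cosine-similarity accuracy check. A minimal sketch of what such a check looks like follows; the flattened comparison and the 0.99 threshold are assumptions for illustration, not the harness's actual parameters.

    # Hedged sketch of a cosine-similarity accuracy check, the kind of comparison
    # the "Setting accuracy check to cosine" fallback refers to.
    import torch

    def cosine_close(expected: torch.Tensor, actual: torch.Tensor, threshold: float = 0.99) -> bool:
        # Compare the overall direction of the flattened outputs rather than
        # elementwise closeness, which tolerates larger numeric drift.
        cos = torch.nn.functional.cosine_similarity(
            expected.flatten().float(), actual.flatten().float(), dim=0
        )
        return bool(cos >= threshold)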
Found from : 2025-08-14T22:00:53.3959437Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.3960011Z return mod(**inputs) 2025-08-14T22:00:53.3960492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.3960914Z outputs = self.model( 2025-08-14T22:00:53.3961337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.3961782Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.3962588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.3963124Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.3963569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.3964152Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.3969889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:00:53.3970435Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:00:53.3970990Z 2025-08-14T22:00:53.3971141Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.3971561Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.3972044Z return mod(**inputs) 2025-08-14T22:00:53.3972465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.3972917Z outputs = self.model( 2025-08-14T22:00:53.3973326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.3973773Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.3974174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.3974586Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.3975032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.3975507Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.3990914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:00:53.3991609Z key_states = self.k_proj(current_states) 2025-08-14T22:00:53.3991796Z 2025-08-14T22:00:53.3991930Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.3992358Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.3992753Z return mod(**inputs) 2025-08-14T22:00:53.3993159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.3993590Z outputs = self.model( 2025-08-14T22:00:53.3993995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.3994431Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.3994823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.3995234Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.3995696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.3996132Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.3996571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:00:53.3997265Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:00:53.3997471Z 2025-08-14T22:00:53.3997576Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.3997843Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.3998247Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.3998603Z return mod(**inputs) 2025-08-14T22:00:53.3998988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.3999403Z outputs = self.model( 2025-08-14T22:00:53.3999957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4000397Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4000790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4001246Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4001669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4002117Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4002641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:00:53.4003080Z value_states = self.v_proj(current_states) 2025-08-14T22:00:53.4003270Z 2025-08-14T22:00:53.4003397Z cudagraph partition due to non gpu ops. 
2025-08-14T22:00:53.4003788Z cudagraph partition due to non gpu ops. Found from : (this warning, interleaved with further "cudagraph partition due to non gpu ops" messages that carry no stack trace, repeats many times through 2025-08-14T22:00:53.4254620Z; the distinct originating traces are consolidated below)

All of the traces share the same prefix:

  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
    outputs = self.model(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
    layer_outputs = decoder_layer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)

and end, via the self-attention call at modeling_xglm.py line 330 ("hidden_states, self_attn_weights = self.self_attn("), in one of the following frames:

  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward
    query_states = self.q_proj(hidden_states) * self.scaling
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward
    key_states = self.k_proj(current_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward
    value_states = self.v_proj(current_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward
    attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward
    attn_output = torch.bmm(attn_probs, value_states)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward
    attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)

or in the decoder feed-forward activation:

  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward
    hidden_states = self.activation_fn(self.fc1(hidden_states))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)
Found from : 2025-08-14T22:00:53.4402899Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4402966Z return mod(**inputs) 2025-08-14T22:00:53.4403208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4403286Z outputs = self.model( 2025-08-14T22:00:53.4403529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4403600Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4403829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4403907Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4404156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4404255Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4404516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:00:53.4404607Z key_states = self.k_proj(current_states) 2025-08-14T22:00:53.4404611Z 2025-08-14T22:00:53.4404749Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4404952Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4405019Z return mod(**inputs) 2025-08-14T22:00:53.4405260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4405356Z outputs = self.model( 2025-08-14T22:00:53.4405603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4405696Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4405925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4406004Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4406256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4406356Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4406600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:00:53.4406740Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:00:53.4406744Z 2025-08-14T22:00:53.4406826Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4406935Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4407131Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4407197Z return mod(**inputs) 2025-08-14T22:00:53.4407451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4407520Z outputs = self.model( 2025-08-14T22:00:53.4407760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4407846Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4408068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4408153Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4408394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4408491Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4408740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:00:53.4408827Z value_states = self.v_proj(current_states) 2025-08-14T22:00:53.4408831Z 2025-08-14T22:00:53.4408933Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4409140Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4409207Z return mod(**inputs) 2025-08-14T22:00:53.4409456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4409525Z outputs = self.model( 2025-08-14T22:00:53.4409764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4409846Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4410070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4410157Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4410417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4410519Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4410771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:00:53.4410883Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:00:53.4410886Z 2025-08-14T22:00:53.4410988Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4411195Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4411277Z return mod(**inputs) 2025-08-14T22:00:53.4411529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4411615Z outputs = self.model( 2025-08-14T22:00:53.4411858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4411938Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4412155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4412234Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4412484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4412581Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4412845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:00:53.4412976Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:00:53.4412980Z 2025-08-14T22:00:53.4413066Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4413158Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4413267Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4413484Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4413553Z return mod(**inputs) 2025-08-14T22:00:53.4413819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4413900Z outputs = self.model( 2025-08-14T22:00:53.4414167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4414242Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4414482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4414564Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4414830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:00:53.4414953Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:53.4415178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:53.4415255Z return self.act(input) 2025-08-14T22:00:53.4415261Z 2025-08-14T22:00:53.4415345Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4415429Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4415516Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4415622Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4415848Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4415917Z return mod(**inputs) 2025-08-14T22:00:53.4416183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4416264Z outputs = self.model( 2025-08-14T22:00:53.4416551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4416630Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4416866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4416976Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4417246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4417349Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4417635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:00:53.4417764Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:00:53.4417796Z 2025-08-14T22:00:53.4417905Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4418123Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4418197Z return mod(**inputs) 2025-08-14T22:00:53.4418466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4418547Z outputs = self.model( 2025-08-14T22:00:53.4418817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4418894Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4419136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4419219Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4419497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4419686Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4419974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:00:53.4420075Z key_states = self.k_proj(current_states) 2025-08-14T22:00:53.4420079Z 2025-08-14T22:00:53.4420193Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4420422Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4420495Z return mod(**inputs) 2025-08-14T22:00:53.4420772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4420857Z outputs = self.model( 2025-08-14T22:00:53.4421132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4421212Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4421460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4421537Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4421787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4421888Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4422131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:00:53.4422274Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:00:53.4422277Z 2025-08-14T22:00:53.4422360Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4422463Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4422674Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4422743Z return mod(**inputs) 2025-08-14T22:00:53.4423015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4423085Z outputs = self.model( 2025-08-14T22:00:53.4423330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4423428Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4423649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4423736Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4424034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4424134Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4424404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:00:53.4424493Z value_states = self.v_proj(current_states) 2025-08-14T22:00:53.4424497Z 2025-08-14T22:00:53.4424598Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4424803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4424877Z return mod(**inputs) 2025-08-14T22:00:53.4425128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4425195Z outputs = self.model( 2025-08-14T22:00:53.4425437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4425518Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4425735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4425813Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4426062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4426159Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4426409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:00:53.4426505Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:00:53.4426508Z 2025-08-14T22:00:53.4426606Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4426813Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4426883Z return mod(**inputs) 2025-08-14T22:00:53.4427132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4427201Z outputs = self.model( 2025-08-14T22:00:53.4427443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4427524Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4427739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4427818Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4428072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4428168Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4428419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:00:53.4428541Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:00:53.4428546Z 2025-08-14T22:00:53.4428625Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4428711Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4428829Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4429033Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4429099Z return mod(**inputs) 2025-08-14T22:00:53.4429363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4429437Z outputs = self.model( 2025-08-14T22:00:53.4429685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4429759Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4430001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4430080Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4430349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:00:53.4430466Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:53.4430671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:53.4430748Z return self.act(input) 2025-08-14T22:00:53.4430751Z 2025-08-14T22:00:53.4430827Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4430901Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4430982Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4431080Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4431278Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4431340Z return mod(**inputs) 2025-08-14T22:00:53.4431575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4431651Z outputs = self.model( 2025-08-14T22:00:53.4431887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4431957Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4432176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4432255Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4432499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4432596Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4432833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:00:53.4432950Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:00:53.4432955Z 2025-08-14T22:00:53.4433055Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4433253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4433318Z return mod(**inputs) 2025-08-14T22:00:53.4433552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4433627Z outputs = self.model( 2025-08-14T22:00:53.4433863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4433933Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4434153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4434230Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4434473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4434571Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4434827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:00:53.4434916Z key_states = self.k_proj(current_states) 2025-08-14T22:00:53.4434936Z 2025-08-14T22:00:53.4435038Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4435234Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4435306Z return mod(**inputs) 2025-08-14T22:00:53.4435542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4435635Z outputs = self.model( 2025-08-14T22:00:53.4435874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4435963Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4436185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4436260Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4436514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4436612Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4436846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:00:53.4436983Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:00:53.4436987Z 2025-08-14T22:00:53.4437065Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4437164Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4437362Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4437428Z return mod(**inputs) 2025-08-14T22:00:53.4437676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4437741Z outputs = self.model( 2025-08-14T22:00:53.4437978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4438058Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4438274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4438350Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4438594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4438689Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4438934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:00:53.4439021Z value_states = self.v_proj(current_states) 2025-08-14T22:00:53.4439024Z 2025-08-14T22:00:53.4439125Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4439330Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4439397Z return mod(**inputs) 2025-08-14T22:00:53.4439646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4439715Z outputs = self.model( 2025-08-14T22:00:53.4439959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4440040Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4440259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4440340Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4440621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4440718Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4440973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:00:53.4441082Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:00:53.4441086Z 2025-08-14T22:00:53.4441184Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4441391Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4441484Z return mod(**inputs) 2025-08-14T22:00:53.4441740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4441993Z outputs = self.model( 2025-08-14T22:00:53.4442243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4442327Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4442547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4442630Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4442888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4442987Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4443244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:00:53.4443369Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:00:53.4443374Z 2025-08-14T22:00:53.4443454Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4443542Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4443645Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4443841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4443917Z return mod(**inputs) 2025-08-14T22:00:53.4444168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4444245Z outputs = self.model( 2025-08-14T22:00:53.4444492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4444566Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4444800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4444880Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4445134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:00:53.4445253Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:53.4445464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:53.4445551Z return self.act(input) 2025-08-14T22:00:53.4445555Z 2025-08-14T22:00:53.4445631Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4445705Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4445786Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4445885Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4446086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4446151Z return mod(**inputs) 2025-08-14T22:00:53.4446393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4446470Z outputs = self.model( 2025-08-14T22:00:53.4446776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4446851Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4447085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4447192Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4447444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4447545Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4447815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:00:53.4447936Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:00:53.4447963Z 2025-08-14T22:00:53.4448065Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4448267Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4448343Z return mod(**inputs) 2025-08-14T22:00:53.4448585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4448666Z outputs = self.model( 2025-08-14T22:00:53.4448909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4448981Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4449208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4449286Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4449535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4449636Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4449875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:00:53.4449964Z key_states = self.k_proj(current_states) 2025-08-14T22:00:53.4449969Z 2025-08-14T22:00:53.4450073Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4450268Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4450342Z return mod(**inputs) 2025-08-14T22:00:53.4450584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4450661Z outputs = self.model( 2025-08-14T22:00:53.4450904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4450977Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4451203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4451281Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4451530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4451636Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4451879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:00:53.4452019Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:00:53.4452023Z 2025-08-14T22:00:53.4452103Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4452204Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4452411Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4452475Z return mod(**inputs) 2025-08-14T22:00:53.4452748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4452819Z outputs = self.model( 2025-08-14T22:00:53.4453063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4453165Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4453381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4453459Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4453727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4453826Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4454096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:00:53.4454184Z value_states = self.v_proj(current_states) 2025-08-14T22:00:53.4454187Z 2025-08-14T22:00:53.4454289Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4454493Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4454561Z return mod(**inputs) 2025-08-14T22:00:53.4454812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4454877Z outputs = self.model( 2025-08-14T22:00:53.4455123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4455201Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4455417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4455495Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4455747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4455844Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4456093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:00:53.4456190Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:00:53.4456194Z 2025-08-14T22:00:53.4456294Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4456498Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4456565Z return mod(**inputs) 2025-08-14T22:00:53.4456806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4456883Z outputs = self.model( 2025-08-14T22:00:53.4457126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4457205Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4457422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4457500Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4457754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4457849Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4458098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:00:53.4458222Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:00:53.4458227Z 2025-08-14T22:00:53.4458307Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4458392Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4458517Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4458730Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4458806Z return mod(**inputs) 2025-08-14T22:00:53.4459092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4459171Z outputs = self.model( 2025-08-14T22:00:53.4459437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4459513Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4459842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4459951Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4460217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:00:53.4460355Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:53.4460584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:53.4460666Z return self.act(input) 2025-08-14T22:00:53.4460671Z 2025-08-14T22:00:53.4460755Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4460839Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4460931Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4461041Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4461266Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4461345Z return mod(**inputs) 2025-08-14T22:00:53.4461611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4461695Z outputs = self.model( 2025-08-14T22:00:53.4461960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4462038Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4462279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4462364Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4462639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4462743Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4463003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:00:53.4463132Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:00:53.4463138Z 2025-08-14T22:00:53.4463246Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4463460Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4463538Z return mod(**inputs) 2025-08-14T22:00:53.4463792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4463873Z outputs = self.model( 2025-08-14T22:00:53.4464128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4464206Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4464445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4464527Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4464782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4464913Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4465172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:00:53.4465263Z key_states = self.k_proj(current_states) 2025-08-14T22:00:53.4465297Z 2025-08-14T22:00:53.4465406Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4465617Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4465696Z return mod(**inputs) 2025-08-14T22:00:53.4465954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4466053Z outputs = self.model( 2025-08-14T22:00:53.4466311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4466405Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4466646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4466727Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4466983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4467097Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4467355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-08-14T22:00:53.4467505Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-08-14T22:00:53.4467509Z 2025-08-14T22:00:53.4467595Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4467702Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4467921Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4467989Z return mod(**inputs) 2025-08-14T22:00:53.4468256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4468329Z outputs = self.model( 2025-08-14T22:00:53.4468588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4468673Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4468905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4468988Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4469257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4469359Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4469630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward 2025-08-14T22:00:53.4469723Z value_states = self.v_proj(current_states) 2025-08-14T22:00:53.4469727Z 2025-08-14T22:00:53.4469832Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4470050Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4470120Z return mod(**inputs) 2025-08-14T22:00:53.4470378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4470456Z outputs = self.model( 2025-08-14T22:00:53.4470714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4470799Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4471030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4471113Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4471399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4471502Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4471790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-08-14T22:00:53.4471889Z attn_output = torch.bmm(attn_probs, value_states) 2025-08-14T22:00:53.4471893Z 2025-08-14T22:00:53.4472001Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4472218Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4472306Z return mod(**inputs) 2025-08-14T22:00:53.4472568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4472665Z outputs = self.model( 2025-08-14T22:00:53.4472921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4473003Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4473232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4473317Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4473580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4473681Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4473954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-08-14T22:00:53.4474085Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-08-14T22:00:53.4474090Z 2025-08-14T22:00:53.4474175Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4474266Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4474373Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4474579Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4474656Z return mod(**inputs) 2025-08-14T22:00:53.4474914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4474991Z outputs = self.model( 2025-08-14T22:00:53.4475256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4475332Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4475571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4475653Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4475919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward 2025-08-14T22:00:53.4476053Z hidden_states = self.activation_fn(self.fc1(hidden_states)) 2025-08-14T22:00:53.4476274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:00:53.4476355Z return self.act(input) 2025-08-14T22:00:53.4476359Z 2025-08-14T22:00:53.4476442Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4476522Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4476611Z cudagraph partition due to non gpu ops 2025-08-14T22:00:53.4476718Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:00:53.4476926Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4477004Z return mod(**inputs) 2025-08-14T22:00:53.4477268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4477345Z outputs = self.model( 2025-08-14T22:00:53.4477630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4477709Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4477951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4478056Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4478315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4478426Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4478712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 156, in forward 2025-08-14T22:00:53.4478859Z query_states = self.q_proj(hidden_states) * self.scaling 2025-08-14T22:00:53.4478863Z 2025-08-14T22:00:53.4478973Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:00:53.4479186Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4479267Z return mod(**inputs) 2025-08-14T22:00:53.4479536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-08-14T22:00:53.4479617Z outputs = self.model( 2025-08-14T22:00:53.4479884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-08-14T22:00:53.4479961Z layer_outputs = decoder_layer( 2025-08-14T22:00:53.4480200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:00:53.4480284Z return super().__call__(*args, **kwargs) 2025-08-14T22:00:53.4480542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-08-14T22:00:53.4480656Z hidden_states, self_attn_weights = self.self_attn( 2025-08-14T22:00:53.4480914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 175, in forward 2025-08-14T22:00:53.4481008Z key_states = self.k_proj(current_states) 2025-08-14T22:00:53.4481013Z 2025-08-14T22:00:53.4481121Z cudagraph partition due to non gpu ops. 
2025-08-14T22:00:53.4481330Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:53.4481405Z     return mod(**inputs)
2025-08-14T22:00:53.4481675Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
2025-08-14T22:00:53.4481754Z     outputs = self.model(
2025-08-14T22:00:53.4482019Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
2025-08-14T22:00:53.4482098Z     layer_outputs = decoder_layer(
2025-08-14T22:00:53.4482335Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:00:53.4482417Z     return super().__call__(*args, **kwargs)
2025-08-14T22:00:53.4482687Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward
2025-08-14T22:00:53.4482801Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T22:00:53.4483069Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward
2025-08-14T22:00:53.4483217Z     attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
2025-08-14T22:00:53.4483221Z 
2025-08-14T22:00:53.4483309Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.4483417Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:00:53.4483635Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:53.4483703Z     return mod(**inputs)
2025-08-14T22:00:53.4483982Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
2025-08-14T22:00:53.4484066Z     outputs = self.model(
2025-08-14T22:00:53.4484323Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
2025-08-14T22:00:53.4484425Z     layer_outputs = decoder_layer(
2025-08-14T22:00:53.4484657Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:00:53.4484738Z     return super().__call__(*args, **kwargs)
2025-08-14T22:00:53.4485019Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward
2025-08-14T22:00:53.4485123Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T22:00:53.4485411Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 176, in forward
2025-08-14T22:00:53.4485501Z     value_states = self.v_proj(current_states)
2025-08-14T22:00:53.4485506Z 
2025-08-14T22:00:53.4485613Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:00:53.4485830Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:53.4485900Z     return mod(**inputs)
2025-08-14T22:00:53.4486158Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
2025-08-14T22:00:53.4486239Z     outputs = self.model(
2025-08-14T22:00:53.4486497Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
2025-08-14T22:00:53.4486581Z     layer_outputs = decoder_layer(
2025-08-14T22:00:53.4486811Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:00:53.4486893Z     return super().__call__(*args, **kwargs)
2025-08-14T22:00:53.4487162Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward
2025-08-14T22:00:53.4487263Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T22:00:53.4487523Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward
2025-08-14T22:00:53.4487631Z     attn_output = torch.bmm(attn_probs, value_states)
2025-08-14T22:00:53.4487634Z 
2025-08-14T22:00:53.4487739Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:00:53.4487962Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:53.4488031Z     return mod(**inputs)
2025-08-14T22:00:53.4488287Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
2025-08-14T22:00:53.4488368Z     outputs = self.model(
2025-08-14T22:00:53.4488625Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
2025-08-14T22:00:53.4488716Z     layer_outputs = decoder_layer(
2025-08-14T22:00:53.4488934Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:00:53.4489011Z     return super().__call__(*args, **kwargs)
2025-08-14T22:00:53.4489259Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward
2025-08-14T22:00:53.4489356Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T22:00:53.4489611Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward
2025-08-14T22:00:53.4489743Z     attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
2025-08-14T22:00:53.4489748Z 
2025-08-14T22:00:53.4489828Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.4489942Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.4490045Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:00:53.4564975Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:53.4565056Z     return mod(**inputs)
2025-08-14T22:00:53.4565344Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
2025-08-14T22:00:53.4565427Z     outputs = self.model(
2025-08-14T22:00:53.4565683Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
2025-08-14T22:00:53.4565758Z     layer_outputs = decoder_layer(
2025-08-14T22:00:53.4565999Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:00:53.4566082Z     return super().__call__(*args, **kwargs)
2025-08-14T22:00:53.4566352Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward
2025-08-14T22:00:53.4566455Z     hidden_states, self_attn_weights = self.self_attn(
2025-08-14T22:00:53.4566722Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward
2025-08-14T22:00:53.4566863Z     attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
2025-08-14T22:00:53.4566867Z 
2025-08-14T22:00:53.4566954Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.4567037Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.4567155Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:00:53.4567364Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:53.4567444Z     return mod(**inputs)
2025-08-14T22:00:53.4567702Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
2025-08-14T22:00:53.4567773Z     outputs = self.model(
2025-08-14T22:00:53.4568038Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
2025-08-14T22:00:53.4568113Z     layer_outputs = decoder_layer(
2025-08-14T22:00:53.4568346Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:00:53.4568436Z     return super().__call__(*args, **kwargs)
2025-08-14T22:00:53.4568694Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 362, in forward
2025-08-14T22:00:53.4568824Z     hidden_states = self.activation_fn(self.fc1(hidden_states))
2025-08-14T22:00:53.4569046Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T22:00:53.4569119Z     return self.act(input)
2025-08-14T22:00:53.4569122Z 
2025-08-14T22:00:53.4569222Z cudagraph partition due to non gpu ops
2025-08-14T22:00:53.4569328Z cudagraph partition due to non gpu ops. Found from :
2025-08-14T22:00:53.4569542Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:00:53.4569612Z     return mod(**inputs)
2025-08-14T22:00:53.4569869Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 681, in forward
2025-08-14T22:00:53.4569961Z     logits = self.lm_head(outputs[0])
2025-08-14T22:00:53.4569965Z 
2025-08-14T22:00:53.4570096Z cudagraph partition due to non gpu ops. Found from :
Found from : 2025-08-14T22:00:53.4570303Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:00:53.4570382Z return mod(**inputs) 2025-08-14T22:00:53.4570655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 685, in forward 2025-08-14T22:00:53.4570741Z loss = self.loss_function( 2025-08-14T22:00:53.4570996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-08-14T22:00:53.4571197Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-08-14T22:00:53.4571473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-08-14T22:00:53.4571699Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-08-14T22:00:53.4571705Z 2025-08-14T22:01:06.6042047Z Compilation time (from dynamo_timed): 31.099583646 2025-08-14T22:01:06.6123229Z pass 2025-08-14T22:01:06.6123704Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:01:06.6124663Z TIMING: _recursive_pre_grad_passes:0.40382 _recursive_joint_graph_passes:0.83324 _recursive_post_grad_passes:0.29074 async_compile.wait:0.96017 code_gen:12.58545 inductor_compile:16.3051 backend_compile:26.45591 gc:0.00121 entire_frame_compile:31.09958 total_wall_time:31.09958 2025-08-14T22:01:06.6125683Z STATS: call_* op count: 921 | FakeTensorMode.__torch_dispatch__:56870 | FakeTensor.__torch_dispatch__:9090 | ProxyTorchDispatchMode.__torch_dispatch__:12392 2025-08-14T22:01:06.6126270Z Dynamo produced 1 graphs covering 921 ops with 0 graph breaks (0 unique) 2025-08-14T22:01:13.1101128Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-14T22:01:13.1102221Z from pkg_resources import resource_filename 2025-08-14T22:01:13.7702997Z 2025-08-14T22:01:17.3679751Z loading model: 0it [00:00, ?it/s] 2025-08-14T22:01:17.3681670Z loading model: 0it [00:03, ?it/s] 2025-08-14T22:01:17.3703557Z cpu eval XLNetLMHeadModel 2025-08-14T22:01:20.1548264Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:01:21.1491655Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:01:22.1290084Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:01:49.0297005Z cudagraph partition due to non gpu ops. 
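[Editor's note] The "Compilation time (from dynamo_timed): 31.099..." line and the TIMING breakdown above come from the benchmark harness's own instrumentation of torch.compile (recent PyTorch releases also expose a per-phase breakdown via torch._dynamo.utils.compile_times()). As a rough, hypothetical way to observe the same kind of number outside the harness, one can time the first, compile-triggering call separately from a steady-state call; the module and shapes below are placeholders.

# Hypothetical timing sketch (not the harness code): the first call through a
# torch.compile'd module pays the Dynamo/Inductor compilation cost reported in
# the log; later calls reuse the cached compiled graph.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(256, 256), torch.nn.ReLU(), torch.nn.Linear(256, 256)
)
compiled = torch.compile(model)
x = torch.randn(8, 256)

t0 = time.perf_counter()
compiled(x)                                   # triggers compilation on first use
print(f"first call (incl. compile): {time.perf_counter() - t0:.2f}s")

t0 = time.perf_counter()
compiled(x)                                   # reuses the compiled artifact
print(f"steady-state call: {time.perf_counter() - t0:.4f}s")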
Found from : 2025-08-14T22:01:49.0299038Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0299436Z return mod(**inputs) 2025-08-14T22:01:49.0300056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0300540Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0301023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1307, in forward 2025-08-14T22:01:49.0301771Z word_emb_k = self.word_embedding(input_ids) 2025-08-14T22:01:49.0301950Z 2025-08-14T22:01:49.0302087Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0302488Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0302867Z return mod(**inputs) 2025-08-14T22:01:49.0303604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0304059Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0304502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-08-14T22:01:49.0305041Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-08-14T22:01:49.0305582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-08-14T22:01:49.0306098Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-08-14T22:01:49.0311694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-08-14T22:01:49.0315133Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-08-14T22:01:49.0315562Z 2025-08-14T22:01:49.0321485Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0326405Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0328131Z return mod(**inputs) 2025-08-14T22:01:49.0328583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0329089Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0329535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-08-14T22:01:49.0330025Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-08-14T22:01:49.0330561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-08-14T22:01:49.0331099Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-08-14T22:01:49.0331769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-08-14T22:01:49.0332323Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-08-14T22:01:49.0332550Z 2025-08-14T22:01:49.0332681Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0333077Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0333438Z return mod(**inputs) 2025-08-14T22:01:49.0333853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0334314Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0334786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0335223Z outputs = layer_module( 2025-08-14T22:01:49.0335637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0336053Z outputs = self.rel_attn( 2025-08-14T22:01:49.0336466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:01:49.0336931Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:01:49.0337098Z 2025-08-14T22:01:49.0337221Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0337618Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0337993Z return mod(**inputs) 2025-08-14T22:01:49.0338410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0338851Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0339317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0340032Z outputs = layer_module( 2025-08-14T22:01:49.0340452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0340882Z outputs = self.rel_attn( 2025-08-14T22:01:49.0341325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-08-14T22:01:49.0341964Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-08-14T22:01:49.0342146Z 2025-08-14T22:01:49.0342275Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0342757Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0343133Z return mod(**inputs) 2025-08-14T22:01:49.0343546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0344030Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0344463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0344890Z outputs = layer_module( 2025-08-14T22:01:49.0345301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0345723Z outputs = self.rel_attn( 2025-08-14T22:01:49.0346123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0346552Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0346984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-08-14T22:01:49.0347496Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-08-14T22:01:49.0347710Z 2025-08-14T22:01:49.0347826Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0348219Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0348566Z return mod(**inputs) 2025-08-14T22:01:49.0348950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0349349Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0349746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-08-14T22:01:49.0350198Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-08-14T22:01:49.0350721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-08-14T22:01:49.0351247Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-08-14T22:01:49.0351750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-08-14T22:01:49.0352275Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-08-14T22:01:49.0352503Z 2025-08-14T22:01:49.0352616Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0352997Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0353333Z return mod(**inputs) 2025-08-14T22:01:49.0353717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0354170Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0354589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0354995Z outputs = layer_module( 2025-08-14T22:01:49.0355413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0355825Z outputs = self.rel_attn( 2025-08-14T22:01:49.0356229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-08-14T22:01:49.0356727Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-08-14T22:01:49.0356939Z 2025-08-14T22:01:49.0357052Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0357439Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0357783Z return mod(**inputs) 2025-08-14T22:01:49.0358227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0358665Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0359086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0359504Z outputs = layer_module( 2025-08-14T22:01:49.0359887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0360297Z outputs = self.rel_attn( 2025-08-14T22:01:49.0360687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0361107Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0361533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-08-14T22:01:49.0362043Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-08-14T22:01:49.0362245Z 2025-08-14T22:01:49.0362367Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0362745Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0363091Z return mod(**inputs) 2025-08-14T22:01:49.0363477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0363906Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0364328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0364737Z outputs = layer_module( 2025-08-14T22:01:49.0365122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0365541Z outputs = self.rel_attn( 2025-08-14T22:01:49.0365945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-08-14T22:01:49.0366400Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-08-14T22:01:49.0366565Z 2025-08-14T22:01:49.0366685Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0367058Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0367405Z return mod(**inputs) 2025-08-14T22:01:49.0367795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0368220Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0368654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0369073Z outputs = layer_module( 2025-08-14T22:01:49.0369463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0369870Z outputs = self.rel_attn( 2025-08-14T22:01:49.0370293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0370708Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0371147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-08-14T22:01:49.0371792Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-08-14T22:01:49.0371997Z 2025-08-14T22:01:49.0372110Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0372500Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0372862Z return mod(**inputs) 2025-08-14T22:01:49.0373280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0373718Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0374188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0374611Z outputs = layer_module( 2025-08-14T22:01:49.0375011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0375474Z outputs = self.rel_attn( 2025-08-14T22:01:49.0375881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.0376346Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.0376824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.0377334Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.0377524Z 2025-08-14T22:01:49.0377641Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0378049Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0378971Z return mod(**inputs) 2025-08-14T22:01:49.0379377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0379886Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0380346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0380777Z outputs = layer_module( 2025-08-14T22:01:49.0381181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0381587Z outputs = self.rel_attn( 2025-08-14T22:01:49.0381983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.0382421Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.0382880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.0383371Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.0383563Z 2025-08-14T22:01:49.0383658Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.0383925Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0384314Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0384686Z return mod(**inputs) 2025-08-14T22:01:49.0385084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0385515Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0385946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0386370Z outputs = layer_module( 2025-08-14T22:01:49.0386799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward 2025-08-14T22:01:49.0387382Z output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h) 2025-08-14T22:01:49.0387979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:01:49.0388443Z return forward_fn(*input_tensors) 2025-08-14T22:01:49.0388868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk 2025-08-14T22:01:49.0389285Z output_x = self.ff(output_x) 2025-08-14T22:01:49.0389728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward 2025-08-14T22:01:49.0390158Z output = self.activation_function(output) 2025-08-14T22:01:49.0390563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:01:49.0390942Z return self.act(input) 2025-08-14T22:01:49.0391071Z 2025-08-14T22:01:49.0391163Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.0391426Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0391816Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0392175Z return mod(**inputs) 2025-08-14T22:01:49.0392566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0392986Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0393416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0393830Z outputs = layer_module( 2025-08-14T22:01:49.0394231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0394644Z outputs = self.rel_attn( 2025-08-14T22:01:49.0395057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:01:49.0395515Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:01:49.0395682Z 2025-08-14T22:01:49.0395803Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0396191Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0396566Z return mod(**inputs) 2025-08-14T22:01:49.0396960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0397385Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0397811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0398227Z outputs = layer_module( 2025-08-14T22:01:49.0398618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0399024Z outputs = self.rel_attn( 2025-08-14T22:01:49.0399420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-08-14T22:01:49.0399869Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-08-14T22:01:49.0400035Z 2025-08-14T22:01:49.0400148Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0400540Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0400894Z return mod(**inputs) 2025-08-14T22:01:49.0401288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0401710Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0402156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0402579Z outputs = layer_module( 2025-08-14T22:01:49.0402964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0403390Z outputs = self.rel_attn( 2025-08-14T22:01:49.0403779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0404194Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0404630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-08-14T22:01:49.0405140Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-08-14T22:01:49.0405368Z 2025-08-14T22:01:49.0405479Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0405885Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0406229Z return mod(**inputs) 2025-08-14T22:01:49.0406623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0407069Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0407501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0407931Z outputs = layer_module( 2025-08-14T22:01:49.0408329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0408754Z outputs = self.rel_attn( 2025-08-14T22:01:49.0409154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-08-14T22:01:49.0409652Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-08-14T22:01:49.0409856Z 2025-08-14T22:01:49.0409979Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0410366Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0410723Z return mod(**inputs) 2025-08-14T22:01:49.0411113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0411554Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0411985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0412415Z outputs = layer_module( 2025-08-14T22:01:49.0412809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0413235Z outputs = self.rel_attn( 2025-08-14T22:01:49.0413634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0414055Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0414480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-08-14T22:01:49.0414988Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-08-14T22:01:49.0415192Z 2025-08-14T22:01:49.0415305Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0415696Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0416044Z return mod(**inputs) 2025-08-14T22:01:49.0416427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0416856Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0417308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0417729Z outputs = layer_module( 2025-08-14T22:01:49.0418115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0418570Z outputs = self.rel_attn( 2025-08-14T22:01:49.0418972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-08-14T22:01:49.0419421Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-08-14T22:01:49.0419679Z 2025-08-14T22:01:49.0419832Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0420258Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0420641Z return mod(**inputs) 2025-08-14T22:01:49.0421033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0421464Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0421900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0422319Z outputs = layer_module( 2025-08-14T22:01:49.0422722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0423141Z outputs = self.rel_attn( 2025-08-14T22:01:49.0423545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0423987Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0424430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-08-14T22:01:49.0424937Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-08-14T22:01:49.0425130Z 2025-08-14T22:01:49.0425253Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0425644Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0426003Z return mod(**inputs) 2025-08-14T22:01:49.0426431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0426859Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0427291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0427718Z outputs = layer_module( 2025-08-14T22:01:49.0428125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0428548Z outputs = self.rel_attn( 2025-08-14T22:01:49.0428953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.0429394Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.0429858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.0430351Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.0430542Z 2025-08-14T22:01:49.0430656Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0431058Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0431408Z return mod(**inputs) 2025-08-14T22:01:49.0431823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0432248Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0432701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0433106Z outputs = layer_module( 2025-08-14T22:01:49.0433505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0433940Z outputs = self.rel_attn( 2025-08-14T22:01:49.0434327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.0434757Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.0435210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.0435708Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.0435891Z 2025-08-14T22:01:49.0436001Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.0436257Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0436641Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0436991Z return mod(**inputs) 2025-08-14T22:01:49.0437369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0437796Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0438216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0438630Z outputs = layer_module( 2025-08-14T22:01:49.0439022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward 2025-08-14T22:01:49.0439584Z output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h) 2025-08-14T22:01:49.0440143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:01:49.0440569Z return forward_fn(*input_tensors) 2025-08-14T22:01:49.0440982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk 2025-08-14T22:01:49.0441406Z output_x = self.ff(output_x) 2025-08-14T22:01:49.0442002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward 2025-08-14T22:01:49.0442465Z output = self.activation_function(output) 2025-08-14T22:01:49.0442855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:01:49.0443245Z return self.act(input) 2025-08-14T22:01:49.0443369Z 2025-08-14T22:01:49.0443458Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.0443720Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0444113Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0444472Z return mod(**inputs) 2025-08-14T22:01:49.0444871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0445304Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0445728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0446147Z outputs = layer_module( 2025-08-14T22:01:49.0446538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0446952Z outputs = self.rel_attn( 2025-08-14T22:01:49.0447344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:01:49.0447786Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:01:49.0447961Z 2025-08-14T22:01:49.0448184Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0448576Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0448923Z return mod(**inputs) 2025-08-14T22:01:49.0449317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0449797Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0450227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0450638Z outputs = layer_module( 2025-08-14T22:01:49.0451079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0451496Z outputs = self.rel_attn( 2025-08-14T22:01:49.0451916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-08-14T22:01:49.0452360Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-08-14T22:01:49.0452535Z 2025-08-14T22:01:49.0452647Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0453028Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0453372Z return mod(**inputs) 2025-08-14T22:01:49.0453766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0454187Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0454604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0455008Z outputs = layer_module( 2025-08-14T22:01:49.0455398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0455810Z outputs = self.rel_attn( 2025-08-14T22:01:49.0456191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0456609Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0457035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-08-14T22:01:49.0457528Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-08-14T22:01:49.0457727Z 2025-08-14T22:01:49.0457838Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0458232Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0458585Z return mod(**inputs) 2025-08-14T22:01:49.0458971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0459395Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0459917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0460340Z outputs = layer_module( 2025-08-14T22:01:49.0460730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0461146Z outputs = self.rel_attn( 2025-08-14T22:01:49.0461543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-08-14T22:01:49.0462028Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-08-14T22:01:49.0462238Z 2025-08-14T22:01:49.0462351Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0462754Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0463110Z return mod(**inputs) 2025-08-14T22:01:49.0463541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0463969Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0464391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0464839Z outputs = layer_module( 2025-08-14T22:01:49.0465205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0465595Z outputs = self.rel_attn( 2025-08-14T22:01:49.0465987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0466389Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0466807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-08-14T22:01:49.0467266Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-08-14T22:01:49.0467453Z 2025-08-14T22:01:49.0467567Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0467921Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0468257Z return mod(**inputs) 2025-08-14T22:01:49.0468621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0469026Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0469413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0469799Z outputs = layer_module( 2025-08-14T22:01:49.0470186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0470596Z outputs = self.rel_attn( 2025-08-14T22:01:49.0470988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-08-14T22:01:49.0471434Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-08-14T22:01:49.0471599Z 2025-08-14T22:01:49.0471718Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0472093Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0472443Z return mod(**inputs) 2025-08-14T22:01:49.0472827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0473251Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0473664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0474076Z outputs = layer_module( 2025-08-14T22:01:49.0474465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0474862Z outputs = self.rel_attn( 2025-08-14T22:01:49.0475256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0475669Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0476090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-08-14T22:01:49.0476587Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-08-14T22:01:49.0476784Z 2025-08-14T22:01:49.0476901Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0477292Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0477650Z return mod(**inputs) 2025-08-14T22:01:49.0478112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0478541Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0478960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0479382Z outputs = layer_module( 2025-08-14T22:01:49.0479770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0480185Z outputs = self.rel_attn( 2025-08-14T22:01:49.0481646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.0482100Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.0482552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.0483065Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.0483249Z 2025-08-14T22:01:49.0483374Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0483764Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0484130Z return mod(**inputs) 2025-08-14T22:01:49.0484523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0484963Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0485388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0485818Z outputs = layer_module( 2025-08-14T22:01:49.0486215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0486626Z outputs = self.rel_attn( 2025-08-14T22:01:49.0487035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.0487484Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.0487930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.0488417Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.0488614Z 2025-08-14T22:01:49.0488705Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.0488969Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0489358Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0489718Z return mod(**inputs) 2025-08-14T22:01:49.0490121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0490556Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0490993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0491415Z outputs = layer_module( 2025-08-14T22:01:49.0491815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward 2025-08-14T22:01:49.0492378Z output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h) 2025-08-14T22:01:49.0492951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:01:49.0493385Z return forward_fn(*input_tensors) 2025-08-14T22:01:49.0493807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk 2025-08-14T22:01:49.0494223Z output_x = self.ff(output_x) 2025-08-14T22:01:49.0494683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward 2025-08-14T22:01:49.0495121Z output = self.activation_function(output) 2025-08-14T22:01:49.0495503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:01:49.0495909Z return self.act(input) 2025-08-14T22:01:49.0496041Z 2025-08-14T22:01:49.0496135Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.0496407Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0496801Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0497193Z return mod(**inputs) 2025-08-14T22:01:49.0497603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0498052Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0498483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0498901Z outputs = layer_module( 2025-08-14T22:01:49.0499305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0499826Z outputs = self.rel_attn( 2025-08-14T22:01:49.0500238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:01:49.0500702Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:01:49.0500872Z 2025-08-14T22:01:49.0500997Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0501391Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0501751Z return mod(**inputs) 2025-08-14T22:01:49.0502149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0502593Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0503027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0503476Z outputs = layer_module( 2025-08-14T22:01:49.0503877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0504286Z outputs = self.rel_attn( 2025-08-14T22:01:49.0504688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-08-14T22:01:49.0505153Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-08-14T22:01:49.0505320Z 2025-08-14T22:01:49.0505436Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0505832Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0506189Z return mod(**inputs) 2025-08-14T22:01:49.0506581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0507008Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0507441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0507872Z outputs = layer_module( 2025-08-14T22:01:49.0508273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0508687Z outputs = self.rel_attn( 2025-08-14T22:01:49.0509088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0509516Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0509988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-08-14T22:01:49.0510498Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-08-14T22:01:49.0510711Z 2025-08-14T22:01:49.0510827Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward
    k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r)

2025-08-14T22:01:49.0515373Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
    attn_vec = self.rel_attn_core(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core
    bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r)

2025-08-14T22:01:49.0520577Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward
    v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v)

2025-08-14T22:01:49.0524787Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
    attn_vec = self.rel_attn_core(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core
    attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)

2025-08-14T22:01:49.0530073Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward
    output_h = self.post_attention(h, attn_vec)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention
    attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o)

2025-08-14T22:01:49.0540568Z cudagraph partition due to non gpu ops
2025-08-14T22:01:49.0540843Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward
    output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
    return forward_fn(*input_tensors)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk
    output_x = self.ff(output_x)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward
    output = self.activation_function(output)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
    return self.act(input)

2025-08-14T22:01:49.0548233Z cudagraph partition due to non gpu ops
2025-08-14T22:01:49.0548476Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward
    q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q)

2025-08-14T22:01:49.0552514Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward
    k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k)

2025-08-14T22:01:49.0556600Z cudagraph partition due to non gpu ops. Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
    transformer_outputs = self.transformer(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
    outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
    outputs = self.rel_attn(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
    attn_vec = self.rel_attn_core(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core
    ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h)
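All of the call sites named above are torch.einsum calls (plus one activation) inside the XLNet relative-attention layer. As a point of reference only, here is a minimal sketch of that einsum pattern compiled with torch.compile; it is not the benchmark harness used by this job, it omits XLNet's scaling and masking, and the tensor sizes are made up. On a CUDA build, compiling with mode="reduce-overhead" enables cudagraphs, which is the feature the partition messages refer to; on this CPU run the sketch simply executes the compiled kernels.

import torch

# Simplified rel_attn_core, loosely following transformers' modeling_xlnet.
# The three einsums mirror the call sites named in the tracebacks above.
def rel_attn_core(q_head, k_head_h, k_head_r, v_head_h, r_w_bias, r_r_bias):
    ac = torch.einsum("ibnd,jbnd->bnij", q_head + r_w_bias, k_head_h)  # content-based score
    bd = torch.einsum("ibnd,jbnd->bnij", q_head + r_r_bias, k_head_r)  # position-based score
    attn_prob = torch.softmax(ac + bd, dim=3)
    return torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)

if __name__ == "__main__":
    i, b, n, d = 8, 2, 4, 16  # hypothetical sizes: seq len, batch, heads, head dim
    q = torch.randn(i, b, n, d)
    kh, kr, vh = (torch.randn(i, b, n, d) for _ in range(3))
    r_w, r_r = torch.randn(n, d), torch.randn(n, d)
    # mode="reduce-overhead" would additionally turn on cudagraphs when running on CUDA
    compiled = torch.compile(rel_attn_core)
    print(compiled(q, kh, kr, vh, r_w, r_r).shape)  # torch.Size([8, 2, 4, 16])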
2025-08-14T22:01:49.0780064Z cudagraph partition due to non gpu ops.
Found from : 2025-08-14T22:01:49.0780462Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0780822Z return mod(**inputs) 2025-08-14T22:01:49.0781280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0781716Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0782177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0782601Z outputs = layer_module( 2025-08-14T22:01:49.0782988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0783406Z outputs = self.rel_attn( 2025-08-14T22:01:49.0783808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-08-14T22:01:49.0784296Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-08-14T22:01:49.0784495Z 2025-08-14T22:01:49.0784607Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0784997Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0785345Z return mod(**inputs) 2025-08-14T22:01:49.0785722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0786146Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0786576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0786978Z outputs = layer_module( 2025-08-14T22:01:49.0787355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0787761Z outputs = self.rel_attn( 2025-08-14T22:01:49.0788167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0788588Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0788999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-08-14T22:01:49.0789491Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-08-14T22:01:49.0789683Z 2025-08-14T22:01:49.0789804Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.0964562Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0964624Z return mod(**inputs) 2025-08-14T22:01:49.0964869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0964949Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0965191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0965262Z outputs = layer_module( 2025-08-14T22:01:49.0965497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0965562Z outputs = self.rel_attn( 2025-08-14T22:01:49.0965806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-08-14T22:01:49.0965901Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-08-14T22:01:49.0965904Z 2025-08-14T22:01:49.0966011Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.0966209Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.0966274Z return mod(**inputs) 2025-08-14T22:01:49.0966538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.0966616Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.0966867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.0966932Z outputs = layer_module( 2025-08-14T22:01:49.0967173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.0967244Z outputs = self.rel_attn( 2025-08-14T22:01:49.0967489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-08-14T22:01:49.0967560Z attn_vec = self.rel_attn_core( 2025-08-14T22:01:49.0967827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-08-14T22:01:49.0967964Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-08-14T22:01:49.0967969Z 2025-08-14T22:01:49.0968076Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.1052509Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.1052576Z return mod(**inputs) 2025-08-14T22:01:49.1052836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.1052917Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.1053176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.1053244Z outputs = layer_module( 2025-08-14T22:01:49.1053495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.1053572Z outputs = self.rel_attn( 2025-08-14T22:01:49.1053825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.1053914Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.1054196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.1054309Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.1054341Z 2025-08-14T22:01:49.1054451Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.1054649Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.1054731Z return mod(**inputs) 2025-08-14T22:01:49.1054995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.1055077Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.1055339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.1055423Z outputs = layer_module( 2025-08-14T22:01:49.1055675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.1055765Z outputs = self.rel_attn( 2025-08-14T22:01:49.1056021Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-08-14T22:01:49.1056111Z output_h = self.post_attention(h, attn_vec) 2025-08-14T22:01:49.1056393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-08-14T22:01:49.1056506Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-08-14T22:01:49.1056510Z 2025-08-14T22:01:49.1056597Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.1056698Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:01:49.1138432Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.1138522Z return mod(**inputs) 2025-08-14T22:01:49.1138774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.1138862Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.1139130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.1139196Z outputs = layer_module( 2025-08-14T22:01:49.1139451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 512, in forward 2025-08-14T22:01:49.1139764Z output_h = apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, output_h) 2025-08-14T22:01:49.1140038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:01:49.1140120Z return forward_fn(*input_tensors) 2025-08-14T22:01:49.1140374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 518, in ff_chunk 2025-08-14T22:01:49.1140459Z output_x = self.ff(output_x) 2025-08-14T22:01:49.1140710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 464, in forward 2025-08-14T22:01:49.1140800Z output = self.activation_function(output) 2025-08-14T22:01:49.1141023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:01:49.1141095Z return self.act(input) 2025-08-14T22:01:49.1141099Z 2025-08-14T22:01:49.1141191Z cudagraph partition due to non gpu ops 2025-08-14T22:01:49.1141294Z cudagraph partition due to non gpu ops. Found from : 2025-08-14T22:01:49.1141492Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:01:49.1141566Z return mod(**inputs) 2025-08-14T22:01:49.1142003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-08-14T22:01:49.1142093Z transformer_outputs = self.transformer( 2025-08-14T22:01:49.1142364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-08-14T22:01:49.1142429Z outputs = layer_module( 2025-08-14T22:01:49.1142683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-08-14T22:01:49.1142753Z outputs = self.rel_attn( 2025-08-14T22:01:49.1142999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-08-14T22:01:49.1143104Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-08-14T22:01:49.1143107Z 2025-08-14T22:01:49.1143208Z cudagraph partition due to non gpu ops. 
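For context, the einsum calls flagged above are XLNet's relative-attention projections and attention scores. Below is a minimal sketch of the same contraction patterns; every size is made up, and the names q, k, v, r_w_bias only mirror the parameter names that appear in the traces, not the benchmark's real model:
    import torch

    # hypothetical sizes: seq len i = j = 8, batch b = 2, heads n = 4, head dim d = 16, hidden h = 64
    i, b, n, d, h = 8, 2, 4, 16, 64
    cat = torch.randn(i, b, h)                       # stand-in for the content stream `cat`
    q, k, v = (torch.randn(h, n, d) for _ in range(3))
    r_w_bias = torch.randn(n, d)

    q_head = torch.einsum("ibh,hnd->ibnd", cat, q)    # q_head_h (modeling_xlnet.py:416)
    k_head_h = torch.einsum("ibh,hnd->ibnd", cat, k)  # k_head_h (modeling_xlnet.py:417)
    v_head_h = torch.einsum("ibh,hnd->ibnd", cat, v)  # v_head_h (modeling_xlnet.py:418)
    ac = torch.einsum("ibnd,jbnd->bnij", q_head + r_w_bias, k_head_h)  # content score (modeling_xlnet.py:263)
    attn_prob = ac.softmax(dim=-1)                    # simplified; the real model also adds positional scores
    attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)    # modeling_xlnet.py:294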
cudagraph partition due to non gpu ops. [The same set of cudagraph-partition diagnostics repeats several more times, each with an identical call path into modeling_xlnet.py.]
cudagraph partition due to non gpu ops. Found from : the same feed-forward activation path as above (ending at transformers/activations.py:69, return self.act(input))
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops. Found from :
  benchmarks/dynamo/huggingface.py:532, in forward_pass: return mod(**inputs)
  -> modeling_xlnet.py:1630, in forward: loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
2025-08-14T22:02:01.3597003Z Compilation time (from dynamo_timed): 37.196639642
2025-08-14T22:02:01.3710488Z pass
2025-08-14T22:02:01.3716035Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:02:01.3721666Z TIMING: _recursive_pre_grad_passes:0.07634 _recursive_joint_graph_passes:1.39287 _recursive_post_grad_passes:0.21757 async_compile.wait:0.52368 code_gen:11.08468 inductor_compile:15.88405 backend_compile:30.72574 gc:0.00041 entire_frame_compile:37.19664 total_wall_time:37.19664
2025-08-14T22:02:01.3723191Z STATS: call_* op count: 818 | FakeTensorMode.__torch_dispatch__:91970 | FakeTensor.__torch_dispatch__:14519 | ProxyTorchDispatchMode.__torch_dispatch__:18687
2025-08-14T22:02:01.3724113Z Dynamo produced 1 graphs covering 818 ops with 0 graph breaks (0 unique)
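As a rough, hypothetical illustration of what the compilation-time figures above measure (the toy function below is not part of the benchmark), the first call of a torch.compile'd callable is where Dynamo tracing and Inductor compilation happen:
    import time
    import torch

    def f(x):
        # toy workload standing in for a real model's forward pass
        return torch.nn.functional.gelu(x @ x.T)

    compiled = torch.compile(f)          # compilation is lazy; nothing happens yet
    x = torch.randn(256, 256)

    t0 = time.perf_counter()
    compiled(x)                          # first call: Dynamo traces, Inductor compiles, then runs
    t1 = time.perf_counter()
    compiled(x)                          # later calls reuse the compiled artifact
    t2 = time.perf_counter()

    print(f"first call (includes compile): {t1 - t0:.2f}s, steady state: {t2 - t1:.4f}s")
The 37.2 s entire_frame_compile / total_wall_time reported above appears to be the analogous measurement taken inside the benchmark harness.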
2025-08-14T22:02:08.0226536Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/llvmlite/binding/ffi.py:175: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-08-14T22:02:08.0227929Z   from pkg_resources import resource_filename
2025-08-14T22:02:10.0825550Z loading model: 0it [00:00, ?it/s]
2025-08-14T22:02:10.0827197Z loading model: 0it [00:01, ?it/s]
2025-08-14T22:02:10.0845081Z cpu eval YituTechConvBert
2025-08-14T22:02:11.0330870Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:02:11.3198951Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:02:11.6187920Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-08-14T22:02:28.1416076Z cudagraph partition due to non gpu ops (logged seven times)
2025-08-14T22:02:28.1418996Z cudagraph partition due to non gpu ops. Found from :
  benchmarks/dynamo/huggingface.py:532, in forward_pass: return mod(**inputs)
  -> /opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py:925, in forward: generator_hidden_states = self.convbert(
  -> modeling_convbert.py:853, in forward: hidden_states = self.encoder(
  -> modeling_convbert.py:625, in forward: layer_outputs = layer_module(
  -> transformers/modeling_layers.py:94, in __call__: return super().__call__(*args, **kwargs)
  -> modeling_convbert.py:561, in forward: self_attention_outputs = self.attention(
  -> modeling_convbert.py:464, in forward: self_outputs = self.self(
  -> modeling_convbert.py:347, in forward: mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
  -> modeling_convbert.py:282, in forward: x = self.depthwise(hidden_states)
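The trace above (and the ones that follow) point into what is presumably ConvBERT's depthwise-separable key convolution: a depthwise Conv1d followed by a pointwise Conv1d, applied to hidden states transposed to (batch, channels, seq). Since this benchmark evaluates the model on CPU (see the "cpu eval YituTechConvBert" line above), these are not GPU ops, which appears to be why the cudagraph partitioner flags them. A minimal sketch of that conv pattern with made-up shapes, not the model's real configuration:
    import torch
    from torch import nn

    # hypothetical sizes standing in for ConvBERT's hidden size and sequence length
    channels, seq_len, kernel = 64, 32, 9

    depthwise = nn.Conv1d(channels, channels, kernel_size=kernel, groups=channels,
                          padding=kernel // 2, bias=False)               # one filter per channel
    pointwise = nn.Conv1d(channels, channels, kernel_size=1, bias=False)  # 1x1 mixing across channels

    hidden_states = torch.randn(2, seq_len, channels)   # (batch, seq, hidden)
    x = hidden_states.transpose(1, 2)                   # -> (batch, hidden, seq), as at modeling_convbert.py:347
    x = pointwise(depthwise(x))                         # the two conv calls flagged at lines 282-283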
Found from : 2025-08-14T22:02:28.1430905Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:28.1431276Z return mod(**inputs) 2025-08-14T22:02:28.1431695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:02:28.1432216Z generator_hidden_states = self.convbert( 2025-08-14T22:02:28.1432680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:02:28.1433146Z hidden_states = self.encoder( 2025-08-14T22:02:28.1433669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:02:28.1434105Z layer_outputs = layer_module( 2025-08-14T22:02:28.1434529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:28.1434941Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:28.1435394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:02:28.1435857Z self_attention_outputs = self.attention( 2025-08-14T22:02:28.1436323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:02:28.1436789Z self_outputs = self.self( 2025-08-14T22:02:28.1437216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:02:28.1437761Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:02:28.1438290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-08-14T22:02:28.1438745Z x = self.pointwise(x) 2025-08-14T22:02:28.1438868Z 2025-08-14T22:02:28.1438960Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1439226Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:28.1439630Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:28.1440001Z return mod(**inputs) 2025-08-14T22:02:28.1440436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:02:28.1440902Z generator_hidden_states = self.convbert( 2025-08-14T22:02:28.1441359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:02:28.1442056Z hidden_states = self.encoder( 2025-08-14T22:02:28.1442503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:02:28.1442946Z layer_outputs = layer_module( 2025-08-14T22:02:28.1443327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:28.1443725Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:28.1444175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:02:28.1444613Z self_attention_outputs = self.attention( 2025-08-14T22:02:28.1445062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:02:28.1445505Z self_outputs = self.self( 2025-08-14T22:02:28.1445934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-08-14T22:02:28.1446440Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-08-14T22:02:28.1446634Z 2025-08-14T22:02:28.1446726Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1447015Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1447285Z cudagraph partition due to non gpu ops. 
Found from :
2025-08-14T22:02:28.1447691Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:02:28.1448089Z     return mod(**inputs)
2025-08-14T22:02:28.1448515Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:02:28.1448981Z     generator_hidden_states = self.convbert(
2025-08-14T22:02:28.1449464Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:02:28.1449918Z     hidden_states = self.encoder(
2025-08-14T22:02:28.1450355Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:02:28.1450815Z     layer_outputs = layer_module(
2025-08-14T22:02:28.1451207Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:02:28.1451610Z     return super().__call__(*args, **kwargs)
2025-08-14T22:02:28.1452060Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:02:28.1452513Z     self_attention_outputs = self.attention(
2025-08-14T22:02:28.1452971Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:02:28.1453424Z     self_outputs = self.self(
2025-08-14T22:02:28.1453865Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward
2025-08-14T22:02:28.1454370Z     conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer)
2025-08-14T22:02:28.1454577Z
2025-08-14T22:02:28.1454666Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1454931Z cudagraph partition due to non gpu ops.
Found from :
2025-08-14T22:02:28.1455329Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:02:28.1455695Z     return mod(**inputs)
2025-08-14T22:02:28.1456114Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:02:28.1456583Z     generator_hidden_states = self.convbert(
2025-08-14T22:02:28.1457066Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:02:28.1457521Z     hidden_states = self.encoder(
2025-08-14T22:02:28.1457978Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:02:28.1458419Z     layer_outputs = layer_module(
2025-08-14T22:02:28.1458800Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:02:28.1459204Z     return super().__call__(*args, **kwargs)
2025-08-14T22:02:28.1459796Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:02:28.1460286Z     self_attention_outputs = self.attention(
2025-08-14T22:02:28.1460744Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:02:28.1461192Z     self_outputs = self.self(
2025-08-14T22:02:28.1461624Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward
2025-08-14T22:02:28.1462114Z     context_layer = torch.cat([context_layer, conv_out], 2)
2025-08-14T22:02:28.1462302Z
2025-08-14T22:02:28.1462393Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1462629Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1462916Z cudagraph partition due to non gpu ops.
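
The three stacks above all end inside ConvBertSelfAttention's span-based dynamic convolution: a softmaxed kernel is produced per position (line 362), applied to the gathered value vectors with a batched matmul (line 380), and the result is concatenated with the ordinary attention heads (line 405). Below is a shape-only toy of those three statements, with made-up dimensions and random tensors standing in for the real intermediates; it is not the transformers implementation, whose reshapes and unfolding are more involved.

    import torch

    # Hypothetical toy shapes, chosen only to make the three ops line up.
    batch, seq_len, heads, head_dim, kernel_size = 2, 16, 4, 64, 9

    # line 362: per-position convolution weights, normalised over the kernel window
    conv_kernel_layer = torch.softmax(
        torch.randn(batch * heads * seq_len, kernel_size, 1), dim=1)

    # stand-in for the unfolded value vectors gathered around each position
    conv_out_layer = torch.randn(batch * heads * seq_len, head_dim, kernel_size)

    # line 380: weighted sum over the kernel window
    conv_out = torch.matmul(conv_out_layer, conv_kernel_layer)   # (B*H*S, head_dim, 1)
    conv_out = conv_out.reshape(batch, seq_len, heads, head_dim)

    # line 405: concatenate with the ordinary self-attention heads
    context_layer = torch.randn(batch, seq_len, heads, head_dim)
    context_layer = torch.cat([context_layer, conv_out], 2)      # (B, S, 2*heads, head_dim)
    print(context_layer.shape)
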
Found from :
2025-08-14T22:02:28.1463317Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:02:28.1463688Z     return mod(**inputs)
2025-08-14T22:02:28.1464114Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:02:28.1464552Z     generator_hidden_states = self.convbert(
2025-08-14T22:02:28.1465248Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:02:28.1465695Z     hidden_states = self.encoder(
2025-08-14T22:02:28.1466121Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:02:28.1466574Z     layer_outputs = layer_module(
2025-08-14T22:02:28.1466949Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:02:28.1467339Z     return super().__call__(*args, **kwargs)
2025-08-14T22:02:28.1467766Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward
2025-08-14T22:02:28.1468198Z     layer_output = apply_chunking_to_forward(
2025-08-14T22:02:28.1468633Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-08-14T22:02:28.1469061Z     return forward_fn(*input_tensors)
2025-08-14T22:02:28.1469530Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk
2025-08-14T22:02:28.1470051Z     intermediate_output = self.intermediate(attention_output)
2025-08-14T22:02:28.1470532Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward
2025-08-14T22:02:28.1470999Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-08-14T22:02:28.1471399Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward
2025-08-14T22:02:28.1471765Z     return self.act(input)
2025-08-14T22:02:28.1471896Z
2025-08-14T22:02:28.1471983Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1472209Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1472426Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1472648Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1472871Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1473087Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1473309Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1473531Z cudagraph partition due to non gpu ops
2025-08-14T22:02:28.1473777Z cudagraph partition due to non gpu ops.
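
This stack ends in the feed-forward path, where the layer output runs through transformers' apply_chunking_to_forward helper before hitting the activation at activations.py line 69. The sketch below is a rough, self-contained equivalent of that chunking pattern; chunked_forward and its arguments are illustrative names, not the library's API. The idea it shows: split the input along one dimension, apply the feed-forward block per chunk, and concatenate the pieces back, trading peak activation memory for extra calls.

    import torch
    import torch.nn as nn

    def chunked_forward(forward_fn, chunk_size, chunk_dim, x):
        # Apply forward_fn to slices of x along chunk_dim and re-concatenate.
        if chunk_size == 0:
            return forward_fn(x)
        return torch.cat(
            [forward_fn(c) for c in x.split(chunk_size, dim=chunk_dim)],
            dim=chunk_dim)

    # Stand-in for intermediate + activation (Linear -> GELU), as in the trace.
    ffn = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))

    x = torch.randn(2, 128, 768)
    out = chunked_forward(ffn, chunk_size=32, chunk_dim=1, x=x)
    print(out.shape)   # torch.Size([2, 128, 768])
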
Found from :
2025-08-14T22:02:28.1474170Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass
2025-08-14T22:02:28.1474517Z     return mod(**inputs)
2025-08-14T22:02:28.1474919Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
2025-08-14T22:02:28.1475352Z     generator_hidden_states = self.convbert(
2025-08-14T22:02:28.1475797Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
2025-08-14T22:02:28.1476224Z     hidden_states = self.encoder(
2025-08-14T22:02:28.1476634Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
2025-08-14T22:02:28.1477065Z     layer_outputs = layer_module(
2025-08-14T22:02:28.1477444Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-08-14T22:02:28.1477834Z     return super().__call__(*args, **kwargs)
2025-08-14T22:02:28.1478283Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
2025-08-14T22:02:28.1478724Z     self_attention_outputs = self.attention(
2025-08-14T22:02:28.1479160Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
2025-08-14T22:02:28.1479608Z     self_outputs = self.self(
2025-08-14T22:02:28.1480022Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward
2025-08-14T22:02:28.1480528Z     mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
2025-08-14T22:02:28.1481071Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward
2025-08-14T22:02:28.1481528Z     x = self.depthwise(hidden_states)
2025-08-14T22:02:28.1481674Z
2025-08-14T22:02:28.1481781Z cudagraph partition due to non gpu ops.
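
This stack and the first one in this group end inside the depthwise/pointwise pair of the key_conv_attn_layer, which is called at modeling_convbert.py line 347 on the transposed hidden states. Below is a minimal sketch of such a separable 1-D convolution using plain nn.Conv1d, with assumed channel and kernel sizes; it mirrors the two statements in the traces (lines 282 and 283) but is not the transformers class itself.

    import torch
    from torch import nn

    class SeparableConv1d(nn.Module):
        def __init__(self, channels, kernel_size):
            super().__init__()
            # groups=channels -> one filter per channel (depthwise step, line 282)
            self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                       padding=kernel_size // 2,
                                       groups=channels, bias=False)
            # 1x1 convolution mixes channels (pointwise step, line 283)
            self.pointwise = nn.Conv1d(channels, channels, kernel_size=1, bias=False)

        def forward(self, hidden_states):        # (batch, channels, seq_len)
            x = self.depthwise(hidden_states)
            x = self.pointwise(x)
            return x

    hidden_states = torch.randn(2, 128, 768)     # (batch, seq_len, hidden)
    conv = SeparableConv1d(channels=768, kernel_size=9)
    # the caller transposes to channels-first, as at modeling_convbert.py line 347
    out = conv(hidden_states.transpose(1, 2))
    print(out.shape)   # torch.Size([2, 768, 128])
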
Found from : 2025-08-14T22:02:28.1676914Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:28.1677242Z return mod(**inputs) 2025-08-14T22:02:28.1677610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:02:28.1694454Z generator_hidden_states = self.convbert( 2025-08-14T22:02:28.1695002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:02:28.1695458Z hidden_states = self.encoder( 2025-08-14T22:02:28.1695905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:02:28.1696445Z layer_outputs = layer_module( 2025-08-14T22:02:28.1696833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:28.1697227Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:28.1697688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:02:28.1698099Z self_attention_outputs = self.attention( 2025-08-14T22:02:28.1698510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:02:28.1698920Z self_outputs = self.self( 2025-08-14T22:02:28.1699353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-08-14T22:02:28.1699924Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-08-14T22:02:28.1700126Z 2025-08-14T22:02:28.1700223Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1700459Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1700713Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:28.1701110Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:28.1701470Z return mod(**inputs) 2025-08-14T22:02:28.1701893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:02:28.1702329Z generator_hidden_states = self.convbert( 2025-08-14T22:02:28.1702775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:02:28.1703208Z hidden_states = self.encoder( 2025-08-14T22:02:28.1703624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:02:28.1704056Z layer_outputs = layer_module( 2025-08-14T22:02:28.1704434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:28.1704824Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:28.1705253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:02:28.1705689Z self_attention_outputs = self.attention( 2025-08-14T22:02:28.1706099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:02:28.1706502Z self_outputs = self.self( 2025-08-14T22:02:28.1706889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-08-14T22:02:28.1707349Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-08-14T22:02:28.1707529Z 2025-08-14T22:02:28.1707624Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1707861Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:28.1862948Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:28.1863019Z return mod(**inputs) 2025-08-14T22:02:28.1863295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:02:28.1863389Z generator_hidden_states = self.convbert( 2025-08-14T22:02:28.1863666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:02:28.1863751Z hidden_states = self.encoder( 2025-08-14T22:02:28.1864031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:02:28.1864106Z layer_outputs = layer_module( 2025-08-14T22:02:28.1864349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:28.1864430Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:28.1864715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:02:28.1864802Z layer_output = apply_chunking_to_forward( 2025-08-14T22:02:28.1865065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:02:28.1865148Z return forward_fn(*input_tensors) 2025-08-14T22:02:28.1865448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:02:28.1865574Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:02:28.1865861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:02:28.1865977Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:02:28.1866201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:02:28.1866273Z return self.act(input) 2025-08-14T22:02:28.1866277Z 2025-08-14T22:02:28.1866355Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1866441Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1866516Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1866591Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1866677Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1866753Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1866836Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1866912Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1867014Z cudagraph partition due to non gpu ops. 
Found from : 2025-08-14T22:02:28.1867248Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:28.1867316Z return mod(**inputs) 2025-08-14T22:02:28.1867583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:02:28.1867694Z generator_hidden_states = self.convbert( 2025-08-14T22:02:28.1867960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:02:28.1868040Z hidden_states = self.encoder( 2025-08-14T22:02:28.1868320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:02:28.1868393Z layer_outputs = layer_module( 2025-08-14T22:02:28.1868622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:28.1868717Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:28.1868982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-08-14T22:02:28.1869071Z self_attention_outputs = self.attention( 2025-08-14T22:02:28.1869341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-08-14T22:02:28.1869420Z self_outputs = self.self( 2025-08-14T22:02:28.1869681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-08-14T22:02:28.1869840Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-08-14T22:02:28.1870113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-08-14T22:02:28.1870192Z x = self.depthwise(hidden_states) 2025-08-14T22:02:28.1870196Z 2025-08-14T22:02:28.1870309Z cudagraph partition due to non gpu ops. 
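The "cudagraph partition due to non gpu ops" lines above are Inductor's CUDA-graph diagnostics: when a compiled region contains ops that do not execute on a GPU (this whole shard runs on a CPU-only linux.8xlarge.amx runner), the region cannot be captured into a CUDA graph and is partitioned or skipped, which is also why the summary further down reports non-zero cudagraph_skips. A minimal sketch of the kind of run that produces such messages is below; the toy block, shapes, and hyperparameters are illustrative assumptions, not taken from this job.

import torch
import torch.nn as nn

# Toy stand-in for the depthwise/pointwise ConvBert block seen in the traces above.
class Block(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.depthwise = nn.Conv1d(dim, dim, kernel_size=9, padding=4, groups=dim)
        self.pointwise = nn.Conv1d(dim, dim, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x):                      # x: (batch, seq, dim)
        y = self.depthwise(x.transpose(1, 2))  # channels-first for Conv1d
        y = self.pointwise(y).transpose(1, 2)
        return self.act(x + y)

model = Block().eval()
# mode="reduce-overhead" asks Inductor to use CUDA graphs where possible; on a
# CPU-only run every op is a "non gpu op", so the cudagraph machinery skips or
# partitions and emits messages like the ones logged above.
compiled = torch.compile(model, mode="reduce-overhead")
with torch.no_grad():
    print(compiled(torch.randn(4, 128, 64)).shape)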
Found from : 2025-08-14T22:02:28.1902676Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 532, in forward_pass 2025-08-14T22:02:28.1902750Z return mod(**inputs) 2025-08-14T22:02:28.1903023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-08-14T22:02:28.1903109Z generator_hidden_states = self.convbert( 2025-08-14T22:02:28.1903373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-08-14T22:02:28.1903445Z hidden_states = self.encoder( 2025-08-14T22:02:28.1903718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-08-14T22:02:28.1903790Z layer_outputs = layer_module( 2025-08-14T22:02:28.1904038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-08-14T22:02:28.1904124Z return super().__call__(*args, **kwargs) 2025-08-14T22:02:28.1904393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 586, in forward 2025-08-14T22:02:28.1904505Z layer_output = apply_chunking_to_forward( 2025-08-14T22:02:28.1904765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-08-14T22:02:28.1904844Z return forward_fn(*input_tensors) 2025-08-14T22:02:28.1905166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 593, in feed_forward_chunk 2025-08-14T22:02:28.1905292Z intermediate_output = self.intermediate(attention_output) 2025-08-14T22:02:28.1905581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 514, in forward 2025-08-14T22:02:28.1905692Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-08-14T22:02:28.1905904Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 69, in forward 2025-08-14T22:02:28.1905982Z return self.act(input) 2025-08-14T22:02:28.1905986Z 2025-08-14T22:02:28.1906063Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1906139Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1906222Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1906298Z cudagraph partition due to non gpu ops 2025-08-14T22:02:28.1906373Z cudagraph partition due to non gpu ops 2025-08-14T22:02:38.9281596Z Compilation time (from dynamo_timed): 25.861546025 2025-08-14T22:02:38.9344486Z pass 2025-08-14T22:02:38.9349268Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-08-14T22:02:38.9354587Z TIMING: _recursive_pre_grad_passes:0.06021 _recursive_joint_graph_passes:0.63366 _recursive_post_grad_passes:0.17432 async_compile.wait:0.82467 code_gen:10.7249 inductor_compile:13.07785 backend_compile:21.40975 gc:0.00016 entire_frame_compile:25.86155 total_wall_time:25.86155 2025-08-14T22:02:38.9360183Z STATS: call_* op count: 634 | FakeTensorMode.__torch_dispatch__:45792 | FakeTensor.__torch_dispatch__:6043 | ProxyTorchDispatchMode.__torch_dispatch__:9702 2025-08-14T22:02:38.9364916Z Dynamo produced 1 graphs covering 634 ops with 0 graph breaks (0 unique) 2025-08-14T22:02:41.3175756Z accuracy pass_rate=95.35% 2025-08-14T22:02:41.3176225Z calls_captured gmean=0.00x mean=609.233x 
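The accuracy pass_rate and the gmean/mean aggregates above are derived from the per-model results CSV, and the check_accuracy.py / check_graph_breaks.py steps that follow compare that CSV against the expected-results file under benchmarks/dynamo/ci_expected_accuracy/. The sketch below shows one way such a comparison can be done; the column names ("name", "accuracy") and the helper functions are assumptions for illustration, not the actual scripts.

import csv

def load_statuses(path):
    # Assumed CSV layout: one row per model with "name" and "accuracy" columns.
    with open(path, newline="") as f:
        return {row["name"]: row["accuracy"] for row in csv.DictReader(f)}

def compare(actual_csv, expected_csv):
    actual = load_statuses(actual_csv)
    expected = load_statuses(expected_csv)
    # Treat any status beginning with "pass" as a pass (illustrative heuristic).
    passes = sum(1 for status in actual.values() if status.startswith("pass"))
    print(f"pass_rate={100.0 * passes / max(len(actual), 1):.2f}%")
    mismatches = {name: (want, actual.get(name, "missing"))
                  for name, want in expected.items()
                  if actual.get(name) != want}
    for name, (want, got) in sorted(mismatches.items()):
        print(f"{name}: expected {want}, got {got}")
    return not mismatches

# Paths as used by this job:
# compare("test/test-reports/inference_huggingface.csv",
#         "benchmarks/dynamo/ci_expected_accuracy/cpu_inductor_freezing_huggingface_inference.csv")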
2025-08-14T22:02:41.3176517Z unique_graphs gmean=0.00x mean=1.093x 2025-08-14T22:02:41.3178897Z graph_breaks gmean=0.00x mean=0.140x 2025-08-14T22:02:41.3184663Z unique_graph_breaks gmean=0.00x mean=0.047x 2025-08-14T22:02:41.3185046Z autograd_captures gmean=0.00x mean=0.000x 2025-08-14T22:02:41.3195929Z autograd_compiles gmean=0.00x mean=0.000x 2025-08-14T22:02:41.3196277Z cudagraph_skips gmean=0.00x mean=1.093x 2025-08-14T22:02:41.3196526Z compilation_latency mean=23.461 seconds 2025-08-14T22:02:42.4347296Z + python benchmarks/dynamo/check_accuracy.py --actual /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv --expected benchmarks/dynamo/ci_expected_accuracy/cpu_inductor_freezing_huggingface_inference.csv 2025-08-14T22:02:42.7295446Z AlbertForMaskedLM PASS 2025-08-14T22:02:42.7301098Z AlbertForQuestionAnswering PASS 2025-08-14T22:02:42.7301526Z AllenaiLongformerBase PASS 2025-08-14T22:02:42.7301817Z BartForCausalLM PASS 2025-08-14T22:02:42.7307895Z BartForConditionalGeneration PASS 2025-08-14T22:02:42.7313172Z BertForMaskedLM PASS 2025-08-14T22:02:42.7315374Z BertForQuestionAnswering PASS 2025-08-14T22:02:42.7321578Z BlenderbotForCausalLM XFAIL 2025-08-14T22:02:42.7326983Z BlenderbotSmallForCausalLM PASS 2025-08-14T22:02:42.7329352Z BlenderbotSmallForConditionalGeneration PASS 2025-08-14T22:02:42.7329756Z CamemBert PASS 2025-08-14T22:02:42.7335429Z DebertaV2ForMaskedLM XFAIL 2025-08-14T22:02:42.7335870Z DebertaV2ForQuestionAnswering PASS 2025-08-14T22:02:42.7336449Z DistilBertForMaskedLM PASS 2025-08-14T22:02:42.7337277Z DistilBertForQuestionAnswering PASS 2025-08-14T22:02:42.7337711Z DistillGPT2 PASS 2025-08-14T22:02:42.7338034Z ElectraForCausalLM PASS 2025-08-14T22:02:42.7342655Z ElectraForQuestionAnswering PASS 2025-08-14T22:02:42.7344324Z GPT2ForSequenceClassification PASS 2025-08-14T22:02:42.7357229Z GoogleFnet PASS 2025-08-14T22:02:42.7359230Z LayoutLMForMaskedLM PASS 2025-08-14T22:02:42.7359815Z LayoutLMForSequenceClassification PASS 2025-08-14T22:02:42.7362772Z M2M100ForConditionalGeneration PASS 2025-08-14T22:02:42.7370331Z MBartForCausalLM PASS 2025-08-14T22:02:42.7372394Z MBartForConditionalGeneration PASS 2025-08-14T22:02:42.7372844Z MT5ForConditionalGeneration PASS 2025-08-14T22:02:42.7373163Z MegatronBertForCausalLM PASS 2025-08-14T22:02:42.7376770Z MegatronBertForQuestionAnswering PASS 2025-08-14T22:02:42.7377204Z MobileBertForMaskedLM PASS 2025-08-14T22:02:42.7386414Z MobileBertForQuestionAnswering PASS 2025-08-14T22:02:42.7391835Z OPTForCausalLM PASS 2025-08-14T22:02:42.7394308Z PLBartForCausalLM PASS 2025-08-14T22:02:42.7397560Z PLBartForConditionalGeneration PASS 2025-08-14T22:02:42.7397954Z PegasusForCausalLM PASS 2025-08-14T22:02:42.7402899Z PegasusForConditionalGeneration PASS 2025-08-14T22:02:42.7409043Z RobertaForCausalLM PASS 2025-08-14T22:02:42.7411424Z RobertaForQuestionAnswering PASS 2025-08-14T22:02:42.7411890Z T5ForConditionalGeneration PASS 2025-08-14T22:02:42.7412282Z T5Small PASS 2025-08-14T22:02:42.7412529Z TrOCRForCausalLM PASS 2025-08-14T22:02:42.7412835Z XGLMForCausalLM PASS 2025-08-14T22:02:42.7418961Z XLNetLMHeadModel PASS 2025-08-14T22:02:42.7419399Z YituTechConvBert PASS 2025-08-14T22:02:42.7937427Z + python benchmarks/dynamo/check_graph_breaks.py --actual /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv --expected benchmarks/dynamo/ci_expected_accuracy/cpu_inductor_freezing_huggingface_inference.csv 2025-08-14T22:02:43.0723148Z AlbertForMaskedLM PASS 2025-08-14T22:02:43.0727925Z 
AlbertForQuestionAnswering PASS 2025-08-14T22:02:43.0729543Z AllenaiLongformerBase PASS 2025-08-14T22:02:43.0730321Z BartForCausalLM PASS 2025-08-14T22:02:43.0730771Z BartForConditionalGeneration PASS 2025-08-14T22:02:43.0734748Z BertForMaskedLM PASS 2025-08-14T22:02:43.0740145Z BertForQuestionAnswering PASS 2025-08-14T22:02:43.0744440Z BlenderbotForCausalLM PASS 2025-08-14T22:02:43.0748667Z BlenderbotSmallForCausalLM PASS 2025-08-14T22:02:43.0754011Z BlenderbotSmallForConditionalGeneration PASS 2025-08-14T22:02:43.0756063Z CamemBert PASS 2025-08-14T22:02:43.0756329Z DebertaV2ForMaskedLM PASS 2025-08-14T22:02:43.0756574Z DebertaV2ForQuestionAnswering PASS 2025-08-14T22:02:43.0756811Z DistilBertForMaskedLM PASS 2025-08-14T22:02:43.0765877Z DistilBertForQuestionAnswering PASS 2025-08-14T22:02:43.0770173Z DistillGPT2 PASS 2025-08-14T22:02:43.0775011Z ElectraForCausalLM PASS 2025-08-14T22:02:43.0775315Z ElectraForQuestionAnswering PASS 2025-08-14T22:02:43.0778298Z GPT2ForSequenceClassification PASS 2025-08-14T22:02:43.0778978Z GoogleFnet PASS 2025-08-14T22:02:43.0783050Z LayoutLMForMaskedLM PASS 2025-08-14T22:02:43.0788736Z LayoutLMForSequenceClassification PASS 2025-08-14T22:02:43.0789072Z M2M100ForConditionalGeneration PASS 2025-08-14T22:02:43.0792641Z MBartForCausalLM PASS 2025-08-14T22:02:43.0795400Z MBartForConditionalGeneration PASS 2025-08-14T22:02:43.0803138Z MT5ForConditionalGeneration PASS 2025-08-14T22:02:43.0803410Z MegatronBertForCausalLM PASS 2025-08-14T22:02:43.0807275Z MegatronBertForQuestionAnswering PASS 2025-08-14T22:02:43.0811110Z MobileBertForMaskedLM PASS 2025-08-14T22:02:43.0811458Z MobileBertForQuestionAnswering PASS 2025-08-14T22:02:43.0818548Z OPTForCausalLM PASS 2025-08-14T22:02:43.0818869Z PLBartForCausalLM PASS 2025-08-14T22:02:43.0825507Z PLBartForConditionalGeneration PASS 2025-08-14T22:02:43.0831499Z PegasusForCausalLM PASS 2025-08-14T22:02:43.0838111Z PegasusForConditionalGeneration PASS 2025-08-14T22:02:43.0838590Z RobertaForCausalLM PASS 2025-08-14T22:02:43.0838944Z RobertaForQuestionAnswering PASS 2025-08-14T22:02:43.0839311Z T5ForConditionalGeneration PASS 2025-08-14T22:02:43.0842004Z T5Small PASS 2025-08-14T22:02:43.0848389Z TrOCRForCausalLM PASS 2025-08-14T22:02:43.0852765Z XGLMForCausalLM PASS_BUT_FLAKY 2025-08-14T22:02:43.0853087Z XLNetLMHeadModel PASS 2025-08-14T22:02:43.0855708Z YituTechConvBert PASS 2025-08-14T22:02:43.1424506Z + sccache_epilogue 2025-08-14T22:02:43.1424870Z + echo '::group::Sccache Compilation Log' 2025-08-14T22:02:43.1425457Z ##[group]Sccache Compilation Log 2025-08-14T22:02:43.1425718Z + echo '=================== sccache compilation log ===================' 2025-08-14T22:02:43.1426011Z =================== sccache compilation log =================== 2025-08-14T22:02:43.1426438Z + python /var/lib/jenkins/workspace/.ci/pytorch/print_sccache_log.py /var/lib/jenkins/sccache_error.log 2025-08-14T22:02:43.1661167Z + echo '=========== If your build fails, please take a look at the log above for possible reasons ===========' 2025-08-14T22:02:43.1661925Z =========== If your build fails, please take a look at the log above for possible reasons =========== 2025-08-14T22:02:43.1662300Z + sccache --show-stats 2025-08-14T22:02:43.1699262Z Compile requests 379 2025-08-14T22:02:43.1699885Z Compile requests executed 0 2025-08-14T22:02:43.1700131Z Cache hits 0 2025-08-14T22:02:43.1700341Z Cache misses 0 2025-08-14T22:02:43.1700576Z Cache hits rate - 2025-08-14T22:02:43.1700820Z Cache timeouts 0 2025-08-14T22:02:43.1701027Z Cache read errors 0 
2025-08-14T22:02:43.1701248Z Forced recaches 0 2025-08-14T22:02:43.1701468Z Cache write errors 0 2025-08-14T22:02:43.1701689Z Cache errors 0 2025-08-14T22:02:43.1701909Z Compilations 0 2025-08-14T22:02:43.1702124Z Compilation failures 0 2025-08-14T22:02:43.1702366Z Non-cacheable compilations 0 2025-08-14T22:02:43.1702584Z Non-cacheable calls 41 2025-08-14T22:02:43.1702809Z Non-compilation calls 338 2025-08-14T22:02:43.1703031Z Unsupported compiler calls 0 2025-08-14T22:02:43.1703253Z Average cache write 0.000 s 2025-08-14T22:02:43.1703477Z Average compiler 0.000 s 2025-08-14T22:02:43.1703695Z Average cache read hit 0.000 s 2025-08-14T22:02:43.1703911Z Failed distributed compilations 0 2025-08-14T22:02:43.1704069Z 2025-08-14T22:02:43.1704146Z Non-cacheable reasons: 2025-08-14T22:02:43.1704335Z -E 41 2025-08-14T22:02:43.1704474Z 2025-08-14T22:02:43.1704649Z Cache location s3, name: ossci-compiler-cache-circleci-v2, prefix: / 2025-08-14T22:02:43.1705252Z Version (client) 0.10.0 2025-08-14T22:02:43.1705483Z + sccache --stop-server 2025-08-14T22:02:43.1722638Z Stopping sccache server... 2025-08-14T22:02:43.1728371Z Compile requests 379 2025-08-14T22:02:43.1728816Z Compile requests executed 0 2025-08-14T22:02:43.1729458Z Cache hits 0 2025-08-14T22:02:43.1730373Z Cache misses 0 2025-08-14T22:02:43.1730703Z Cache hits rate - 2025-08-14T22:02:43.1730939Z Cache timeouts 0 2025-08-14T22:02:43.1731151Z Cache read errors 0 2025-08-14T22:02:43.1731637Z Forced recaches 0 2025-08-14T22:02:43.1731861Z Cache write errors 0 2025-08-14T22:02:43.1732075Z Cache errors 0 2025-08-14T22:02:43.1732344Z Compilations 0 2025-08-14T22:02:43.1732566Z Compilation failures 0 2025-08-14T22:02:43.1732797Z Non-cacheable compilations 0 2025-08-14T22:02:43.1733020Z Non-cacheable calls 41 2025-08-14T22:02:43.1733232Z Non-compilation calls 338 2025-08-14T22:02:43.1733455Z Unsupported compiler calls 0 2025-08-14T22:02:43.1733699Z Average cache write 0.000 s 2025-08-14T22:02:43.1733926Z Average compiler 0.000 s 2025-08-14T22:02:43.1734154Z Average cache read hit 0.000 s 2025-08-14T22:02:43.1734388Z Failed distributed compilations 0 2025-08-14T22:02:43.1734532Z 2025-08-14T22:02:43.1734613Z Non-cacheable reasons: 2025-08-14T22:02:43.1734815Z -E 41 2025-08-14T22:02:43.1734950Z 2025-08-14T22:02:43.1735135Z Cache location s3, name: ossci-compiler-cache-circleci-v2, prefix: / 2025-08-14T22:02:43.1735454Z Version (client) 0.10.0 2025-08-14T22:02:43.1735739Z + echo ::endgroup:: 2025-08-14T22:02:43.1736185Z ##[endgroup] 2025-08-14T22:02:43.1736363Z + cleanup_workspace 2025-08-14T22:02:43.1736709Z + echo 'sudo may print the following warning message that can be ignored. The chown command will still run.' 2025-08-14T22:02:43.1737204Z sudo may print the following warning message that can be ignored. The chown command will still run. 
2025-08-14T22:02:43.1737617Z + echo ' sudo: setrlimit(RLIMIT_STACK): Operation not permitted' 2025-08-14T22:02:43.1737936Z sudo: setrlimit(RLIMIT_STACK): Operation not permitted 2025-08-14T22:02:43.1738310Z + echo 'For more details refer to https://github.com/sudo-project/sudo/issues/42' 2025-08-14T22:02:43.1738691Z For more details refer to https://github.com/sudo-project/sudo/issues/42 2025-08-14T22:02:43.1739018Z + sudo chown -R 1000 /var/lib/jenkins/workspace 2025-08-14T22:02:43.6167934Z ##[group]Run pytorch/test-infra/.github/actions/upload-benchmark-results@main 2025-08-14T22:02:43.6168328Z with: 2025-08-14T22:02:43.6168553Z benchmark-results-dir: test/test-reports 2025-08-14T22:02:43.6168813Z dry-run: false 2025-08-14T22:02:43.6169007Z schema-version: v3 2025-08-14T22:02:43.6169443Z github-token: *** 2025-08-14T22:02:43.6169641Z env: 2025-08-14T22:02:43.6169821Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:43.6170189Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:43.6170583Z ##[endgroup] 2025-08-14T22:02:43.6184434Z ##[group]Run set -eux 2025-08-14T22:02:43.6184640Z set -eux 2025-08-14T22:02:43.6184889Z python3 -mpip install boto3==1.35.33 psutil==7.0.0 pynvml==12.0.0 2025-08-14T22:02:43.6185162Z  2025-08-14T22:02:43.6185319Z DEVICE_NAME="" 2025-08-14T22:02:43.6185493Z DEVICE_TYPE="" 2025-08-14T22:02:43.6185663Z  2025-08-14T22:02:43.6185868Z if command -v nvidia-smi; then 2025-08-14T22:02:43.6186166Z  # NB: I'm using PyTorch here to get the device name, however, it needs to 2025-08-14T22:02:43.6186523Z  # install the correct version of PyTorch manually for now. Any PyTorch 2025-08-14T22:02:43.6186855Z  # version is fine, I just use 2.7.1 to satify PYPIDEP linter 2025-08-14T22:02:43.6187135Z  python3 -mpip install torch==2.7.1 2025-08-14T22:02:43.6187362Z elif command -v rocminfo; then 2025-08-14T22:02:43.6187707Z  # NB: Installing torch on ROCm runner with pip here causes CI to fail 2025-08-14T22:02:43.6188057Z  # with a memoryview is too large error only on MI300 runners. Is pip 2025-08-14T22:02:43.6188404Z  # version on ROCm runner there too old? 
As a workaround, let's use the 2025-08-14T22:02:43.6188705Z  # GPU device name coming from rocminfo instead 2025-08-14T22:02:43.6188943Z  DEVICE_NAME=rocm 2025-08-14T22:02:43.6189256Z  DEVICE_TYPE=$(rocminfo | grep "Marketing Name" | tail -n1 | awk -F':' '{print $2}' | xargs) 2025-08-14T22:02:43.6189644Z fi 2025-08-14T22:02:43.6189789Z  2025-08-14T22:02:43.6189974Z echo "DEVICE_NAME=$DEVICE_NAME" >> $GITHUB_ENV 2025-08-14T22:02:43.6190239Z echo "DEVICE_TYPE=$DEVICE_TYPE" >> $GITHUB_ENV 2025-08-14T22:02:43.6198777Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:43.6199052Z env: 2025-08-14T22:02:43.6199227Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:43.6199544Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:43.6199883Z ##[endgroup] 2025-08-14T22:02:43.6234322Z + python3 -mpip install boto3==1.35.33 psutil==7.0.0 pynvml==12.0.0 2025-08-14T22:02:43.8066501Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T22:02:44.5717583Z Collecting boto3==1.35.33 2025-08-14T22:02:44.5890509Z Downloading boto3-1.35.33-py3-none-any.whl (139 kB) 2025-08-14T22:02:44.8259328Z Collecting psutil==7.0.0 2025-08-14T22:02:44.8305866Z Downloading psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (277 kB) 2025-08-14T22:02:44.8575604Z Collecting pynvml==12.0.0 2025-08-14T22:02:44.8619382Z Downloading pynvml-12.0.0-py3-none-any.whl (26 kB) 2025-08-14T22:02:45.7298002Z Collecting botocore<1.36.0,>=1.35.33 2025-08-14T22:02:45.7380546Z Downloading botocore-1.35.99-py3-none-any.whl (13.3 MB) 2025-08-14T22:02:45.8723218Z Collecting s3transfer<0.11.0,>=0.10.0 2025-08-14T22:02:45.8768930Z Downloading s3transfer-0.10.4-py3-none-any.whl (83 kB) 2025-08-14T22:02:45.8820676Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.33) (0.10.0) 2025-08-14T22:02:45.9168316Z Collecting nvidia-ml-py<13.0.0a0,>=12.0.0 2025-08-14T22:02:45.9214709Z Downloading nvidia_ml_py-12.575.51-py3-none-any.whl (47 kB) 2025-08-14T22:02:45.9300456Z Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (2.8.1) 2025-08-14T22:02:45.9309476Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.25.10) 2025-08-14T22:02:46.0693851Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.15.0) 2025-08-14T22:02:46.1827346Z Installing collected packages: botocore, s3transfer, nvidia-ml-py, pynvml, psutil, boto3 2025-08-14T22:02:46.5450036Z Attempting uninstall: nvidia-ml-py 2025-08-14T22:02:46.5455078Z Found existing installation: nvidia-ml-py 11.525.84 2025-08-14T22:02:46.5459407Z Uninstalling nvidia-ml-py-11.525.84: 2025-08-14T22:02:46.5601804Z Successfully uninstalled nvidia-ml-py-11.525.84 2025-08-14T22:02:46.6176534Z Attempting uninstall: psutil 2025-08-14T22:02:46.6176910Z Found existing installation: psutil 5.9.8 2025-08-14T22:02:46.6230420Z Uninstalling psutil-5.9.8: 2025-08-14T22:02:46.6233547Z Successfully uninstalled psutil-5.9.8 2025-08-14T22:02:46.7635443Z Successfully installed boto3-1.35.33 botocore-1.35.99 nvidia-ml-py-12.575.51 psutil-7.0.0 pynvml-12.0.0 s3transfer-0.10.4 2025-08-14T22:02:46.8935504Z + DEVICE_NAME= 
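The upload-benchmark-results step above probes for nvidia-smi and rocminfo to fill DEVICE_NAME/DEVICE_TYPE before writing them to GITHUB_ENV; on this CPU runner both probes fail, so the set -x trace around this point shows them staying empty. A Python rendition of the same probe logic is sketched below; the "cuda" placeholder for the NVIDIA branch and the rocminfo parsing details are assumptions, not the action's actual script.

import os
import shutil
import subprocess

def detect_device():
    # Mirror the shell logic: prefer NVIDIA, then ROCm, otherwise leave empty.
    if shutil.which("nvidia-smi"):
        # Assumption: the real step derives the device name via torch; "cuda"
        # is only a placeholder here.
        return "cuda", ""
    if shutil.which("rocminfo"):
        out = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
        names = [line.split(":", 1)[1].strip()
                 for line in out.splitlines() if "Marketing Name" in line]
        return "rocm", names[-1] if names else ""
    return "", ""

device_name, device_type = detect_device()
env_file = os.environ.get("GITHUB_ENV")  # present when running inside Actions
payload = f"DEVICE_NAME={device_name}\nDEVICE_TYPE={device_type}\n"
if env_file:
    with open(env_file, "a") as f:
        f.write(payload)
else:
    print(payload, end="")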
2025-08-14T22:02:46.8935856Z + DEVICE_TYPE= 2025-08-14T22:02:46.8936114Z + command -v nvidia-smi 2025-08-14T22:02:46.8936316Z + command -v rocminfo 2025-08-14T22:02:46.8936748Z + echo DEVICE_NAME= 2025-08-14T22:02:46.8936925Z + echo DEVICE_TYPE= 2025-08-14T22:02:46.8952521Z ##[group]Run set -eux 2025-08-14T22:02:46.8952719Z set -eux 2025-08-14T22:02:46.8952880Z  2025-08-14T22:02:46.8953047Z if [[ -z "${GITHUB_TOKEN}" ]]; then 2025-08-14T22:02:46.8953285Z  echo "Missing github-token input" 2025-08-14T22:02:46.8953494Z  exit 1 2025-08-14T22:02:46.8953653Z fi 2025-08-14T22:02:46.8958977Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:46.8959295Z env: 2025-08-14T22:02:46.8959450Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:46.8959753Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:46.8960073Z DEVICE_NAME: 2025-08-14T22:02:46.8960240Z DEVICE_TYPE: 2025-08-14T22:02:46.8960606Z GITHUB_TOKEN: *** 2025-08-14T22:02:46.8960786Z ##[endgroup] 2025-08-14T22:02:46.8986718Z + [[ -z *** ]] 2025-08-14T22:02:46.9019461Z ##[group]Run pytorch/test-infra/.github/actions/get-workflow-job-id@main 2025-08-14T22:02:46.9020048Z with: 2025-08-14T22:02:46.9020388Z github-token: *** 2025-08-14T22:02:46.9020583Z env: 2025-08-14T22:02:46.9020761Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:46.9021117Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:46.9021493Z DEVICE_NAME: 2025-08-14T22:02:46.9021675Z DEVICE_TYPE: 2025-08-14T22:02:46.9021882Z ##[endgroup] 2025-08-14T22:02:46.9032555Z ##[group]Run set -eux 2025-08-14T22:02:46.9032762Z set -eux 2025-08-14T22:02:46.9032933Z  2025-08-14T22:02:46.9033244Z python3 "${GITHUB_ACTION_PATH}/../../scripts/get_workflow_job_id.py" "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-08-14T22:02:46.9038393Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:46.9038691Z env: 2025-08-14T22:02:46.9038882Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:46.9039235Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:46.9039618Z DEVICE_NAME: 2025-08-14T22:02:46.9039807Z DEVICE_TYPE: 2025-08-14T22:02:46.9040143Z GITHUB_TOKEN: *** 2025-08-14T22:02:46.9040342Z ##[endgroup] 2025-08-14T22:02:46.9067203Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/get-workflow-job-id/../../scripts/get_workflow_job_id.py 16976338999 i-06c8ea4ed8741f176 2025-08-14T22:02:48.3625265Z setting job-id=48128261046 2025-08-14T22:02:48.3625828Z setting job-name=linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T22:02:48.3743134Z ##[group]Run set -eux 2025-08-14T22:02:48.3743369Z set -eux 2025-08-14T22:02:48.3743546Z  2025-08-14T22:02:48.3743832Z python3 "${GITHUB_ACTION_PATH}/../../scripts/benchmarks/gather_metadata.py" \ 2025-08-14T22:02:48.3744219Z  --schema-version "${SCHEMA_VERSION}" \ 2025-08-14T22:02:48.3744489Z  --repo "${REPO}" \ 2025-08-14T22:02:48.3744711Z  --head-branch "${HEAD_BRANCH}" \ 2025-08-14T22:02:48.3744950Z  --head-sha "${HEAD_SHA}" \ 2025-08-14T22:02:48.3745195Z  --workflow-id "${WORKFLOW_RUN_ID}" \ 2025-08-14T22:02:48.3745442Z  --run-attempt "${RUN_ATTEMPT}" \ 2025-08-14T22:02:48.3745678Z  --job-id "${JOB_ID}" \ 2025-08-14T22:02:48.3745901Z  --job-name "${JOB_NAME}" 2025-08-14T22:02:48.3750902Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:48.3751173Z env: 
2025-08-14T22:02:48.3751347Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:48.3751682Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:48.3752004Z DEVICE_NAME: 2025-08-14T22:02:48.3752166Z DEVICE_TYPE: 2025-08-14T22:02:48.3752333Z SCHEMA_VERSION: v3 2025-08-14T22:02:48.3752503Z REPO: pytorch/pytorch 2025-08-14T22:02:48.3752690Z HEAD_BRANCH: refs/heads/main 2025-08-14T22:02:48.3753009Z HEAD_SHA: 1fc683cf17c8c673044538d10266c00f92987be2 2025-08-14T22:02:48.3753229Z WORKFLOW_RUN_ID: 16976338999 2025-08-14T22:02:48.3753410Z RUN_ATTEMPT: 1 2025-08-14T22:02:48.3753574Z JOB_ID: 48128261046 2025-08-14T22:02:48.3753926Z JOB_NAME: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T22:02:48.3754285Z ##[endgroup] 2025-08-14T22:02:48.3784581Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/benchmarks/gather_metadata.py --schema-version v3 --repo pytorch/pytorch --head-branch refs/heads/main --head-sha 1fc683cf17c8c673044538d10266c00f92987be2 --workflow-id 16976338999 --run-attempt 1 --job-id 48128261046 --job-name 'linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)' 2025-08-14T22:02:48.4056643Z ##[group]Run set -eux 2025-08-14T22:02:48.4056862Z set -eux 2025-08-14T22:02:48.4057047Z  2025-08-14T22:02:48.4057312Z python3 "${GITHUB_ACTION_PATH}/../../scripts/benchmarks/gather_runners_info.py" 2025-08-14T22:02:48.4062533Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:48.4062800Z env: 2025-08-14T22:02:48.4062963Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:48.4063285Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:48.4063629Z DEVICE_NAME: 2025-08-14T22:02:48.4063800Z DEVICE_TYPE: 2025-08-14T22:02:48.4063963Z ##[endgroup] 2025-08-14T22:02:48.4088712Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/benchmarks/gather_runners_info.py 2025-08-14T22:02:48.4425358Z INFO:root:Fail to import torch to get the device name 2025-08-14T22:02:48.4516922Z ##[group]Run set -eux 2025-08-14T22:02:48.4517127Z set -eux 2025-08-14T22:02:48.4517273Z  2025-08-14T22:02:48.4517467Z # TODO (huydhn): Implement this part 2025-08-14T22:02:48.4517714Z echo "dependencies={}" >> "${GITHUB_OUTPUT}" 2025-08-14T22:02:48.4522385Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:48.4522634Z env: 2025-08-14T22:02:48.4522797Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:48.4523107Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:48.4523564Z DEVICE_NAME: 2025-08-14T22:02:48.4523741Z DEVICE_TYPE: 2025-08-14T22:02:48.4523915Z ##[endgroup] 2025-08-14T22:02:48.4547563Z + echo 'dependencies={}' 2025-08-14T22:02:48.4564447Z ##[group]Run set -eux 2025-08-14T22:02:48.4564652Z set -eux 2025-08-14T22:02:48.4564816Z  2025-08-14T22:02:48.4565001Z if [[ ! 
-d "${BENCHMARK_RESULTS_DIR}" ]]; then 2025-08-14T22:02:48.4565290Z  echo "${BENCHMARK_RESULTS_DIR} does not exist, skipping" 2025-08-14T22:02:48.4565601Z  # We don't want the job to fail if the directory doesn't exist 2025-08-14T22:02:48.4565853Z  exit 0 2025-08-14T22:02:48.4566015Z fi 2025-08-14T22:02:48.4566164Z  2025-08-14T22:02:48.4566326Z if [[ "${DRY_RUN}" == "true" ]]; then 2025-08-14T22:02:48.4566628Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-08-14T22:02:48.4566976Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-08-14T22:02:48.4567260Z  --metadata "${BENCHMARK_METADATA}" \ 2025-08-14T22:02:48.4567491Z  --runners "${RUNNER_INFO}" \ 2025-08-14T22:02:48.4567724Z  --dependencies "${DEPENDENCIES}" \ 2025-08-14T22:02:48.4567940Z  --dry-run 2025-08-14T22:02:48.4568105Z else 2025-08-14T22:02:48.4568352Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-08-14T22:02:48.4568726Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-08-14T22:02:48.4569067Z  --metadata "${BENCHMARK_METADATA}" \ 2025-08-14T22:02:48.4569288Z  --runners "${RUNNER_INFO}" \ 2025-08-14T22:02:48.4569509Z  --dependencies "${DEPENDENCIES}" 2025-08-14T22:02:48.4569716Z fi 2025-08-14T22:02:48.4574097Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:48.4574339Z env: 2025-08-14T22:02:48.4574501Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:48.4574800Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:48.4575118Z DEVICE_NAME: 2025-08-14T22:02:48.4575340Z DEVICE_TYPE: 2025-08-14T22:02:48.4575522Z BENCHMARK_RESULTS_DIR: test/test-reports 2025-08-14T22:02:48.4575721Z DRY_RUN: false 2025-08-14T22:02:48.4576563Z BENCHMARK_METADATA: {"timestamp": 1755208968, "schema_version": "v3", "name": "linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)", "repo": "pytorch/pytorch", "head_branch": "refs/heads/main", "head_sha": "1fc683cf17c8c673044538d10266c00f92987be2", "workflow_id": 16976338999, "run_attempt": 1, "job_id": 48128261046} 2025-08-14T22:02:48.4577618Z RUNNER_INFO: [{"cpu_info": "x86_64", "cpu_count": 32, "avail_mem_in_gb": 123, "extra_info": {"hostname": "ip-10-0-19-47.ec2.internal"}, "name": "", "type": ""}] 2025-08-14T22:02:48.4578002Z DEPENDENCIES: {} 2025-08-14T22:02:48.4578164Z ##[endgroup] 2025-08-14T22:02:48.4603037Z + [[ ! 
-d test/test-reports ]] 2025-08-14T22:02:48.4603280Z + [[ false == \t\r\u\e ]] 2025-08-14T22:02:48.4604915Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/upload_benchmark_results.py --benchmark-results-dir test/test-reports --metadata '{"timestamp": 1755208968, "schema_version": "v3", "name": "linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)", "repo": "pytorch/pytorch", "head_branch": "refs/heads/main", "head_sha": "1fc683cf17c8c673044538d10266c00f92987be2", "workflow_id": 16976338999, "run_attempt": 1, "job_id": 48128261046}' --runners '[{"cpu_info": "x86_64", "cpu_count": 32, "avail_mem_in_gb": 123, "extra_info": {"hostname": "ip-10-0-19-47.ec2.internal"}, "name": "", "type": ""}]' --dependencies '{}' 2025-08-14T22:02:48.5833967Z INFO:root:Upload test/test-reports/inference_huggingface.json to s3://ossci-benchmarks/v3/pytorch/pytorch/16976338999/48128261046/inference_huggingface.json 2025-08-14T22:02:48.6129264Z INFO:botocore.credentials:Found credentials from IAM Role: gh-ci-github-action-runners-runner-role 2025-08-14T22:02:48.8487928Z ##[group]Run cat test/**/*_toprint.log || true 2025-08-14T22:02:48.8488225Z cat test/**/*_toprint.log || true 2025-08-14T22:02:48.8493056Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:48.8493318Z env: 2025-08-14T22:02:48.8493491Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:48.8493803Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:48.8494157Z DEVICE_NAME: 2025-08-14T22:02:48.8494326Z DEVICE_TYPE: 2025-08-14T22:02:48.8494491Z ##[endgroup] 2025-08-14T22:02:48.8570607Z cat: 'test/**/*_toprint.log': No such file or directory 2025-08-14T22:02:48.8597716Z ##[group]Run kill "$MONITOR_SCRIPT_PID" 2025-08-14T22:02:48.8597979Z kill "$MONITOR_SCRIPT_PID" 2025-08-14T22:02:48.8602663Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:48.8602914Z env: 2025-08-14T22:02:48.8603076Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:48.8603378Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:48.8603715Z DEVICE_NAME: 2025-08-14T22:02:48.8603878Z DEVICE_TYPE: 2025-08-14T22:02:48.8604048Z MONITOR_SCRIPT_PID: 47836 2025-08-14T22:02:48.8604228Z ##[endgroup] 2025-08-14T22:02:48.8703974Z Prepare all required actions 2025-08-14T22:02:48.8704467Z Getting action download info 2025-08-14T22:02:49.0318952Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a) 2025-08-14T22:02:49.2280890Z Download action repository 'actions/upload-artifact@v4' (SHA:ea165f8d65b6e75b540449e92b4886f43607fa02) 2025-08-14T22:02:49.6764773Z ##[group]Run ./.github/actions/upload-test-artifacts 2025-08-14T22:02:49.6765035Z with: 2025-08-14T22:02:49.6765331Z file-suffix: test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T22:02:49.6765663Z s3-bucket: gha-artifacts 2025-08-14T22:02:49.6765859Z env: 2025-08-14T22:02:49.6766026Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:49.6766420Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:49.6766733Z DEVICE_NAME: 2025-08-14T22:02:49.6766894Z DEVICE_TYPE: 2025-08-14T22:02:49.6767041Z ##[endgroup] 2025-08-14T22:02:49.6784138Z ##[group]Run # Remove any previous test jsons if they exist 2025-08-14T22:02:49.6784478Z # Remove any previous test 
jsons if they exist 2025-08-14T22:02:49.6784749Z rm -f test-jsons-*.zip 2025-08-14T22:02:49.6785055Z zip -r "test-jsons-${FILE_SUFFIX}.zip" test/test-reports -i '*.json' 2025-08-14T22:02:49.6790136Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:49.6790398Z env: 2025-08-14T22:02:49.6790574Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:49.6790892Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:49.6791200Z DEVICE_NAME: 2025-08-14T22:02:49.6791364Z DEVICE_TYPE: 2025-08-14T22:02:49.6791644Z FILE_SUFFIX: test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T22:02:49.6791951Z ##[endgroup] 2025-08-14T22:02:49.7019921Z adding: test/test-reports/inference_huggingface.json (deflated 99%) 2025-08-14T22:02:49.7042080Z ##[group]Run # Remove any previous test reports if they exist 2025-08-14T22:02:49.7042407Z # Remove any previous test reports if they exist 2025-08-14T22:02:49.7042670Z rm -f test-reports-*.zip 2025-08-14T22:02:49.7042966Z zip -r "test-reports-${FILE_SUFFIX}.zip" test/test-reports -i '*.xml' -i '*.csv' 2025-08-14T22:02:49.7047712Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:49.7047941Z env: 2025-08-14T22:02:49.7048096Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:49.7048375Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:49.7048672Z DEVICE_NAME: 2025-08-14T22:02:49.7048826Z DEVICE_TYPE: 2025-08-14T22:02:49.7049081Z FILE_SUFFIX: test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T22:02:49.7049378Z ##[endgroup] 2025-08-14T22:02:49.7101563Z adding: test/test-reports/inference_huggingface.csv (deflated 69%) 2025-08-14T22:02:49.7102090Z adding: test/test-reports/inference_huggingface_graph_breaks.csv (deflated 85%) 2025-08-14T22:02:49.7105710Z adding: test/test-reports/inference_huggingface_graph_break_deduped.csv (deflated 64%) 2025-08-14T22:02:49.7171180Z ##[group]Run # Remove any previous usage logs if they exist 2025-08-14T22:02:49.7171484Z # Remove any previous usage logs if they exist 2025-08-14T22:02:49.7171718Z rm -f logs-*.zip 2025-08-14T22:02:49.7171940Z zip "logs-${FILE_SUFFIX}.zip" 'usage_log.txt' || true 2025-08-14T22:02:49.7172234Z zip -r "logs-${FILE_SUFFIX}.zip" test/test-reports -i '*.log' || true 2025-08-14T22:02:49.7176537Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:49.7176763Z env: 2025-08-14T22:02:49.7176923Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:49.7177218Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:49.7177524Z DEVICE_NAME: 2025-08-14T22:02:49.7177670Z DEVICE_TYPE: 2025-08-14T22:02:49.7178083Z FILE_SUFFIX: test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T22:02:49.7178393Z ##[endgroup] 2025-08-14T22:02:49.7273290Z adding: usage_log.txt (deflated 96%) 2025-08-14T22:02:49.7285528Z 2025-08-14T22:02:49.7286137Z zip error: Nothing to do! 
(logs-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip) 2025-08-14T22:02:49.7302431Z ##[group]Run # Remove any previous debugging artifacts if they exist 2025-08-14T22:02:49.7302796Z # Remove any previous debugging artifacts if they exist 2025-08-14T22:02:49.7303060Z rm -f debug-*.zip 2025-08-14T22:02:49.7303256Z if [ -d 'test/debug' ]; then 2025-08-14T22:02:49.7303484Z  zip -r "debug-${FILE_SUFFIX}.zip" test/debug 2025-08-14T22:02:49.7303775Z fi 2025-08-14T22:02:49.7308019Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:49.7308267Z env: 2025-08-14T22:02:49.7308429Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:49.7308734Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:49.7309056Z DEVICE_NAME: 2025-08-14T22:02:49.7309228Z DEVICE_TYPE: 2025-08-14T22:02:49.7309499Z FILE_SUFFIX: test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046 2025-08-14T22:02:49.7309795Z ##[endgroup] 2025-08-14T22:02:49.7376140Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:02:49.7376369Z with: 2025-08-14T22:02:49.7376538Z s3-bucket: gha-artifacts 2025-08-14T22:02:49.7376761Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:02:49.7376985Z retention-days: 14 2025-08-14T22:02:49.7377163Z if-no-files-found: warn 2025-08-14T22:02:49.7377349Z path: test-jsons-*.zip 2025-08-14T22:02:49.7377554Z name: artifact 2025-08-14T22:02:49.7377715Z region: us-east-1 2025-08-14T22:02:49.7377867Z env: 2025-08-14T22:02:49.7378020Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:49.7378329Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:49.7378629Z DEVICE_NAME: 2025-08-14T22:02:49.7378791Z DEVICE_TYPE: 2025-08-14T22:02:49.7378946Z ##[endgroup] 2025-08-14T22:02:50.0285893Z NOTE: s3-prefix specified, ignoring name parameter 2025-08-14T22:02:50.0288556Z With the provided path, there will be 1 file uploaded 2025-08-14T22:02:50.0288876Z Uploading to s3 prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:02:50.0320121Z Starting upload of test-jsons-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:50.1381427Z Finished upload of test-jsons-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:50.1535039Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:02:50.1535295Z with: 2025-08-14T22:02:50.1535470Z s3-bucket: gha-artifacts 2025-08-14T22:02:50.1535705Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:02:50.1535940Z retention-days: 14 2025-08-14T22:02:50.1536139Z if-no-files-found: error 2025-08-14T22:02:50.1536343Z path: test-reports-*.zip 2025-08-14T22:02:50.1536528Z name: artifact 2025-08-14T22:02:50.1536704Z region: us-east-1 2025-08-14T22:02:50.1536930Z env: 2025-08-14T22:02:50.1537099Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:50.1537419Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:50.1537749Z DEVICE_NAME: 2025-08-14T22:02:50.1537920Z DEVICE_TYPE: 2025-08-14T22:02:50.1538083Z ##[endgroup] 2025-08-14T22:02:50.4144977Z NOTE: s3-prefix specified, ignoring name parameter 2025-08-14T22:02:50.4145343Z With the provided path, there will be 1 file uploaded 2025-08-14T22:02:50.4145639Z Uploading to s3 prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:02:50.4175757Z Starting upload of 
test-reports-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:50.5242520Z Finished upload of test-reports-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:50.5398416Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:02:50.5398667Z with: 2025-08-14T22:02:50.5398845Z s3-bucket: gha-artifacts 2025-08-14T22:02:50.5399252Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:02:50.5399501Z retention-days: 14 2025-08-14T22:02:50.5399686Z if-no-files-found: ignore 2025-08-14T22:02:50.5399892Z path: logs-*.zip 2025-08-14T22:02:50.5400070Z name: artifact 2025-08-14T22:02:50.5400234Z region: us-east-1 2025-08-14T22:02:50.5400408Z env: 2025-08-14T22:02:50.5400573Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:50.5400893Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:50.5401229Z DEVICE_NAME: 2025-08-14T22:02:50.5401465Z DEVICE_TYPE: 2025-08-14T22:02:50.5401621Z ##[endgroup] 2025-08-14T22:02:50.8080846Z NOTE: s3-prefix specified, ignoring name parameter 2025-08-14T22:02:50.8081199Z With the provided path, there will be 1 file uploaded 2025-08-14T22:02:50.8081559Z Uploading to s3 prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:02:50.8116785Z Starting upload of logs-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:50.9301904Z Finished upload of logs-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:50.9454392Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-08-14T22:02:50.9454648Z with: 2025-08-14T22:02:50.9454827Z s3-bucket: gha-artifacts 2025-08-14T22:02:50.9455065Z s3-prefix: pytorch/pytorch/16976338999/1/artifact 2025-08-14T22:02:50.9455307Z retention-days: 14 2025-08-14T22:02:50.9455500Z if-no-files-found: ignore 2025-08-14T22:02:50.9455700Z path: debug-*.zip 2025-08-14T22:02:50.9455881Z name: artifact 2025-08-14T22:02:50.9456054Z region: us-east-1 2025-08-14T22:02:50.9456225Z env: 2025-08-14T22:02:50.9456380Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:50.9456720Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:50.9457055Z DEVICE_NAME: 2025-08-14T22:02:50.9457227Z DEVICE_TYPE: 2025-08-14T22:02:50.9457416Z ##[endgroup] 2025-08-14T22:02:51.2029692Z No files were found with the provided path: debug-*.zip. No artifacts will be uploaded. 2025-08-14T22:02:51.2207844Z ##[group]Run # shellcheck disable=SC2156 2025-08-14T22:02:51.2208139Z # shellcheck disable=SC2156 2025-08-14T22:02:51.2208530Z find . 
-iname "core.[1-9]*" -exec docker exec "${DOCKER_CONTAINER_ID}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \; 2025-08-14T22:02:51.2213636Z shell: /usr/bin/bash -e {0} 2025-08-14T22:02:51.2213841Z env: 2025-08-14T22:02:51.2214003Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:51.2214329Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:51.2214674Z DEVICE_NAME: 2025-08-14T22:02:51.2214845Z DEVICE_TYPE: 2025-08-14T22:02:51.2215008Z ##[endgroup] 2025-08-14T22:02:51.4035004Z Prepare all required actions 2025-08-14T22:02:51.4035366Z Getting action download info 2025-08-14T22:02:51.4938103Z ##[group]Run ./.github/actions/upload-utilization-stats 2025-08-14T22:02:51.4938433Z with: 2025-08-14T22:02:51.4938632Z job_id: 48128261046 2025-08-14T22:02:51.4939082Z job_name: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T22:02:51.4939706Z workflow_name: inductor-periodic 2025-08-14T22:02:51.4939981Z workflow_run_id: 16976338999 2025-08-14T22:02:51.4940219Z workflow_attempt: 1 2025-08-14T22:02:51.4940423Z env: 2025-08-14T22:02:51.4940619Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:51.4941000Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:51.4941403Z DEVICE_NAME: 2025-08-14T22:02:51.4941597Z DEVICE_TYPE: 2025-08-14T22:02:51.4942133Z ##[endgroup] 2025-08-14T22:02:51.4955097Z ##[group]Run echo "workflow_id: 16976338999" 2025-08-14T22:02:51.4955391Z echo "workflow_id: 16976338999" 2025-08-14T22:02:51.4955625Z echo "workflow_attempt: 1" 2025-08-14T22:02:51.4955861Z echo "workflow_Name: inductor-periodic" 2025-08-14T22:02:51.4956173Z echo "job_id: 48128261046" 2025-08-14T22:02:51.4956588Z echo "job_name: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)" 2025-08-14T22:02:51.4956993Z echo "artifact_prefix: " 2025-08-14T22:02:51.4957212Z python3 --version 2025-08-14T22:02:51.4961979Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:51.4962242Z env: 2025-08-14T22:02:51.4962408Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:51.4962742Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:51.4963149Z DEVICE_NAME: 2025-08-14T22:02:51.4963320Z DEVICE_TYPE: 2025-08-14T22:02:51.4963505Z ##[endgroup] 2025-08-14T22:02:51.4990664Z workflow_id: 16976338999 2025-08-14T22:02:51.4993354Z workflow_attempt: 1 2025-08-14T22:02:51.4993564Z workflow_Name: inductor-periodic 2025-08-14T22:02:51.4993785Z job_id: 48128261046 2025-08-14T22:02:51.4994182Z job_name: linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-08-14T22:02:51.4994584Z artifact_prefix: 2025-08-14T22:02:51.5003451Z Python 3.9.23 2025-08-14T22:02:51.5031545Z ##[group]Run nick-fields/retry@v3.0.0 2025-08-14T22:02:51.5031781Z with: 2025-08-14T22:02:51.5031952Z shell: bash 2025-08-14T22:02:51.5032131Z timeout_minutes: 5 2025-08-14T22:02:51.5032308Z max_attempts: 5 2025-08-14T22:02:51.5032496Z retry_wait_seconds: 30 2025-08-14T22:02:51.5032887Z command: set -eu python3 -m pip install python-dateutil==2.8.2 boto3==1.35.42 pandas==2.1.3 dataclasses_json==0.6.7 2025-08-14T22:02:51.5033303Z polling_interval_seconds: 1 2025-08-14T22:02:51.5033540Z warning_on_retry: true 2025-08-14T22:02:51.5033758Z continue_on_error: false 2025-08-14T22:02:51.5033955Z env: 2025-08-14T22:02:51.5034115Z GIT_DEFAULT_BRANCH: main 
2025-08-14T22:02:51.5034450Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:51.5034809Z DEVICE_NAME: 2025-08-14T22:02:51.5034983Z DEVICE_TYPE: 2025-08-14T22:02:51.5035148Z ##[endgroup] 2025-08-14T22:02:51.7792134Z Defaulting to user installation because normal site-packages is not writeable 2025-08-14T22:02:51.8433436Z Collecting python-dateutil==2.8.2 2025-08-14T22:02:51.8573430Z Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) 2025-08-14T22:02:52.5466258Z Collecting boto3==1.35.42 2025-08-14T22:02:52.5504206Z Downloading boto3-1.35.42-py3-none-any.whl (139 kB) 2025-08-14T22:02:52.9108187Z Collecting pandas==2.1.3 2025-08-14T22:02:52.9175304Z Downloading pandas-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.3 MB) 2025-08-14T22:02:53.0283967Z Requirement already satisfied: dataclasses_json==0.6.7 in /home/ec2-user/.local/lib/python3.9/site-packages (0.6.7) 2025-08-14T22:02:53.0294183Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil==2.8.2) (1.15.0) 2025-08-14T22:02:53.0331123Z Requirement already satisfied: botocore<1.36.0,>=1.35.42 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (1.35.99) 2025-08-14T22:02:53.0332145Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.0) 2025-08-14T22:02:53.0332727Z Requirement already satisfied: s3transfer<0.11.0,>=0.10.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.4) 2025-08-14T22:02:53.1010912Z Collecting tzdata>=2022.1 2025-08-14T22:02:53.1049508Z Downloading tzdata-2025.2-py2.py3-none-any.whl (347 kB) 2025-08-14T22:02:53.7064213Z Collecting numpy<2,>=1.22.4 2025-08-14T22:02:53.7105705Z Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB) 2025-08-14T22:02:53.8356387Z Requirement already satisfied: pytz>=2020.1 in /usr/lib/python3.9/site-packages (from pandas==2.1.3) (2022.7.1) 2025-08-14T22:02:53.8376695Z Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from dataclasses_json==0.6.7) (3.26.1) 2025-08-14T22:02:53.8378178Z Requirement already satisfied: typing-inspect<1,>=0.4.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from dataclasses_json==0.6.7) (0.9.0) 2025-08-14T22:02:53.8429329Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.42->boto3==1.35.42) (1.25.10) 2025-08-14T22:02:53.8519810Z Requirement already satisfied: packaging>=17.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from marshmallow<4.0.0,>=3.18.0->dataclasses_json==0.6.7) (25.0) 2025-08-14T22:02:53.8604363Z Requirement already satisfied: typing-extensions>=3.7.4 in /home/ec2-user/.local/lib/python3.9/site-packages (from typing-inspect<1,>=0.4.0->dataclasses_json==0.6.7) (4.14.1) 2025-08-14T22:02:53.8606006Z Requirement already satisfied: mypy-extensions>=0.3.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from typing-inspect<1,>=0.4.0->dataclasses_json==0.6.7) (1.1.0) 2025-08-14T22:02:53.9843503Z Installing collected packages: python-dateutil, tzdata, numpy, pandas, boto3 2025-08-14T22:02:58.0377216Z Attempting uninstall: boto3 2025-08-14T22:02:58.0377545Z Found existing installation: boto3 1.35.33 2025-08-14T22:02:58.0448404Z Uninstalling boto3-1.35.33: 2025-08-14T22:02:58.0457279Z Successfully uninstalled boto3-1.35.33 
2025-08-14T22:02:58.0922073Z Successfully installed boto3-1.35.42 numpy-1.26.4 pandas-2.1.3 python-dateutil-2.8.2 tzdata-2025.2 2025-08-14T22:02:58.5781749Z Command completed after 1 attempt(s). 2025-08-14T22:02:58.5835822Z ##[group]Run python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-08-14T22:02:58.5836333Z python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-08-14T22:02:58.5836674Z  --workflow-run-id "16976338999" \ 2025-08-14T22:02:58.5836942Z  --workflow-name "inductor-periodic" \ 2025-08-14T22:02:58.5837208Z  --workflow-run-attempt "1" \ 2025-08-14T22:02:58.5837449Z  --job-id "48128261046" \ 2025-08-14T22:02:58.5837873Z  --job-name "linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)" \ 2025-08-14T22:02:58.5838292Z  --local-path "" \ 2025-08-14T22:02:58.5838507Z  --artifact-prefix "" 2025-08-14T22:02:58.5843719Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:58.5843988Z env: 2025-08-14T22:02:58.5844161Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:58.5844477Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:58.5844820Z DEVICE_NAME: 2025-08-14T22:02:58.5844993Z DEVICE_TYPE: 2025-08-14T22:02:58.5845160Z ##[endgroup] 2025-08-14T22:02:59.5007743Z repo: pytorch/pytorch 2025-08-14T22:02:59.5009193Z Search for test log in s3 bucket: ossci-utilization 2025-08-14T22:02:59.5009671Z Downloading logs-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:59.5013568Z extracting usage_log.txt from zip file logs-test-cpu_inductor_freezing_huggingface-1-1-linux.8xlarge.amx_48128261046.zip 2025-08-14T22:02:59.5014168Z Converted Log Model: UtilizationMetadata: 2025-08-14T22:02:59.5015215Z UtilizationMetadata(level='metadata', workflow_id='16976338999', job_id='48128261046', workflow_name='inductor-periodic', job_name='linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)', usage_collect_interval=1.0, data_model_version=1.5, start_at=1755207308, gpu_count=0, cpu_count=32, gpu_type=None, error=None) 2025-08-14T22:02:59.5016227Z [Db Segments] detected pytest cmd: 10, generated segments: 10 2025-08-14T22:02:59.5016483Z [db model] Peek db timeseries 2025-08-14T22:02:59.5016678Z :{ 2025-08-14T22:02:59.5016830Z "created_at": 1755208979, 2025-08-14T22:02:59.5017020Z "type": "utilization", 2025-08-14T22:02:59.5017188Z "tags": [ 2025-08-14T22:02:59.5017339Z "record" 2025-08-14T22:02:59.5017697Z ], 2025-08-14T22:02:59.5017840Z "time_stamp": 1755207308, 2025-08-14T22:02:59.5018031Z "repo": "pytorch/pytorch", 2025-08-14T22:02:59.5018220Z "workflow_id": 16976338999, 2025-08-14T22:02:59.5018427Z "run_attempt": 1, 2025-08-14T22:02:59.5018599Z "job_id": 48128261046, 2025-08-14T22:02:59.5018792Z "workflow_name": "inductor-periodic", 2025-08-14T22:02:59.5019165Z "job_name": "linux-jammy-cpu-py3.9-gcc11-inductor / test (cpu_inductor_freezing_huggingface, 1, 1, linux.8xlarge.amx)", 2025-08-14T22:02:59.5019642Z "json_data": "{}" 2025-08-14T22:02:59.5019896Z } 2025-08-14T22:02:59.5020209Z Writing 1 documents to S3 ossci-utilization/util_metadata/v_1.5/pytorch/pytorch/16976338999/1/48128261046/metadata 2025-08-14T22:02:59.5020775Z Done! 
Finish writing document to S3 ossci-utilization/util_metadata/v_1.5/pytorch/pytorch/16976338999/1/48128261046/metadata 2025-08-14T22:02:59.5021340Z Writing 329 documents to S3 ossci-utilization/util_timeseries/v_1.5/pytorch/pytorch/16976338999/1/48128261046/time_series 2025-08-14T22:02:59.5021905Z Done! Finish writing document to S3 ossci-utilization/util_timeseries/v_1.5/pytorch/pytorch/16976338999/1/48128261046/time_series 2025-08-14T22:02:59.6159681Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main 2025-08-14T22:02:59.6160018Z with: 2025-08-14T22:02:59.6160194Z env: 2025-08-14T22:02:59.6160376Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:59.6160731Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:59.6161064Z DEVICE_NAME: 2025-08-14T22:02:59.6161240Z DEVICE_TYPE: 2025-08-14T22:02:59.6161403Z ##[endgroup] 2025-08-14T22:02:59.6172798Z ##[group]Run set -eou pipefail 2025-08-14T22:02:59.6173044Z set -eou pipefail 2025-08-14T22:02:59.6173238Z  2025-08-14T22:02:59.6173489Z echo "Holding runner for 2 hours until all ssh sessions have logged out" 2025-08-14T22:02:59.6173799Z for _ in $(seq 1440); do 2025-08-14T22:02:59.6174051Z  # Break if no ssh session exists anymore 2025-08-14T22:02:59.6174293Z  if [ "$(who)" = "" ]; then 2025-08-14T22:02:59.6174493Z  break 2025-08-14T22:02:59.6174703Z  fi 2025-08-14T22:02:59.6174865Z  echo "." 2025-08-14T22:02:59.6175041Z  sleep 5 2025-08-14T22:02:59.6175214Z done 2025-08-14T22:02:59.6180409Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:59.6180716Z env: 2025-08-14T22:02:59.6180912Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:59.6181289Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:59.6181674Z DEVICE_NAME: 2025-08-14T22:02:59.6181867Z DEVICE_TYPE: 2025-08-14T22:02:59.6182060Z ##[endgroup] 2025-08-14T22:02:59.6203906Z Holding runner for 2 hours until all ssh sessions have logged out 2025-08-14T22:02:59.6284058Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T22:02:59.6284670Z # ignore expansion of "docker ps -q" since it could be empty 2025-08-14T22:02:59.6284934Z # shellcheck disable=SC2046 2025-08-14T22:02:59.6285169Z docker stop $(docker ps -q) || true 2025-08-14T22:02:59.6285411Z # Prune all of the docker images 2025-08-14T22:02:59.6285626Z docker system prune -af 2025-08-14T22:02:59.6290154Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:02:59.6290402Z env: 2025-08-14T22:02:59.6290561Z GIT_DEFAULT_BRANCH: main 2025-08-14T22:02:59.6290864Z DOCKER_CONTAINER_ID: a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:02:59.6291172Z DEVICE_NAME: 2025-08-14T22:02:59.6291328Z DEVICE_TYPE: 2025-08-14T22:02:59.6291474Z ##[endgroup] 2025-08-14T22:03:10.6713415Z a7aa204eccbc 2025-08-14T22:03:10.9694457Z Deleted Containers: 2025-08-14T22:03:10.9697360Z a7aa204eccbcf3a0e6886619bb590755eb27ea6b069c8a28798d748638509f93 2025-08-14T22:03:10.9697995Z 2025-08-14T22:03:18.2672754Z Deleted Images: 2025-08-14T22:03:18.2673432Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3.9-gcc11-inductor-benchmarks-bfa89110622ba7202628e9faac705f183070defe 2025-08-14T22:03:18.2679434Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image@sha256:4236794baba289041d240d08fd393bbd57497c3012e5e0ccd9fd98f61ebf35c6 2025-08-14T22:03:18.2680221Z deleted: 
sha256:0899ae453036ee7a91795ea95b1db61000579eeb74b140edab5976919ee64bbe 2025-08-14T22:03:18.2680774Z deleted: sha256:aa7b544271e9ba3105dabd1afb12e315887018f3471e03135c1d50e64cc550c4 2025-08-14T22:03:18.2681623Z deleted: sha256:4c685831817cc2fc6dfdfda1726df1f402222d8cdccc40daad3198cf8b17e3f4 2025-08-14T22:03:18.2682100Z deleted: sha256:cedf3fb09a62e68c6d7e22cedbce12e77166a50649d0269200ee0efce8a57b88 2025-08-14T22:03:18.2682559Z deleted: sha256:1b3a9a237b4153f8f523a85cead9d36e29717eb57182e2f75069788681627d95 2025-08-14T22:03:18.2683008Z deleted: sha256:67bd313103dfbe7fe0172e6f4f7ee420fad9743a64a1cc1cd20bc22250d3602c 2025-08-14T22:03:18.2683509Z deleted: sha256:b17820137ada46a2a726c67aa08cce73d2ead7c95db08575cf5e69bedb4b600d 2025-08-14T22:03:18.2684134Z deleted: sha256:b16c9bc40cc1cf924638323aece4168d6332cfae212dad2a431a584a44fe967c 2025-08-14T22:03:18.2684530Z deleted: sha256:ab35ed781133eb4aaa1b2478aea73fb80dc71bceffbe474b55e1a60fc6c5ffbe 2025-08-14T22:03:18.2684908Z deleted: sha256:b9d0b0720dd9c0bcb4f174ae6770a7c2fe540c6983872180f3a5e18300434cdb 2025-08-14T22:03:18.2685281Z deleted: sha256:f5d1a4f32d90030cc174d73b579758d28f95c992a8cf21360e5addee99dea169 2025-08-14T22:03:18.2685673Z deleted: sha256:4af408141f8591f4b69cef9b425b6caa3c4cbc62ced38b5d08f3150f0c8ff449 2025-08-14T22:03:18.2686053Z deleted: sha256:e0019e5c461051e54a9af37ae22b49cfd2c2e5366da57a20304f6ef89171a3b3 2025-08-14T22:03:18.2686438Z deleted: sha256:542f999b2cfc965b97861645356840864e9946fa2fa40f1f5c4c45684e91c239 2025-08-14T22:03:18.2686804Z deleted: sha256:633629aa3d4ae6472e222a1c0b2ceb729b0d84ccb48e12d52ba2d2987c9063e1 2025-08-14T22:03:18.2687173Z deleted: sha256:ea645aba1ba54baac43713f3df7f1b89dd119764a747273897eb2931fea42856 2025-08-14T22:03:18.2687550Z deleted: sha256:1f50e367efff88c7182b9dc3ff618c1cf7bd34edf2f31805e268c50fac02a627 2025-08-14T22:03:18.2687932Z deleted: sha256:aff22d7ae43d842befa617e2e5f9878d09a82b67c362b0c44a40a4c88be92120 2025-08-14T22:03:18.2688306Z deleted: sha256:4275d4addb77b473ed40194e42918cf2aeb484d1d8e25cf54d374392643a095c 2025-08-14T22:03:18.2688674Z deleted: sha256:66471f6c8dc869455ff193909110d824b5d65f7383877a7d0face6331b21fff3 2025-08-14T22:03:18.2689040Z deleted: sha256:8cfd2d55570494ff2b993725f5eb13d0440a5698fa905823ca1677d2d16febb8 2025-08-14T22:03:18.2689416Z deleted: sha256:5c8cf8b9c4a76f679994decc8800bc6eefd258a8dc6293a714d5e100fea3a1bc 2025-08-14T22:03:18.2689790Z deleted: sha256:1acc162c6b9de62d13ce7fd33bb9b134458f7e7dbe996e5442e0047ec8f70c80 2025-08-14T22:03:18.2690173Z deleted: sha256:044bab98f3bceb1948c626ce6bdd19d3ec8f9c5ad42a4f635dd685a7ae9c9024 2025-08-14T22:03:18.2690557Z deleted: sha256:2acb11a9448f13c2c2d29c4d0d4013e046862bd019cf5ec9fe04bdf35299f1dd 2025-08-14T22:03:18.2690927Z deleted: sha256:8e7b56334416233f301944000dec16952e13bb69296cc80e1031bfecaf6e7f9d 2025-08-14T22:03:18.2691300Z deleted: sha256:4a4d1ec727c43389a601aefccdaeff6b3bf54c0daefb12e0c2098c3e18b383ba 2025-08-14T22:03:18.2691669Z deleted: sha256:8b9ca4276331196a2f03c2fa3a87422d2042cf06011b49368c2335be7da829c1 2025-08-14T22:03:18.2692028Z deleted: sha256:5076357fd3cc8b06ed54a0f692362a38f1ebafa4843c0b0bf8021f9021d2e583 2025-08-14T22:03:18.2692395Z deleted: sha256:f9451fa0842798e2a67c059fda5124cafb401801bb8c40d03ae736ff3ef5ed20 2025-08-14T22:03:18.2692766Z deleted: sha256:52b716f02091d6af6b79e7b2e1f5bbd7391235993d415c7a852d6752220c8b65 2025-08-14T22:03:18.2693130Z deleted: sha256:748225161c361d3779c96eb7ae5ea0c33d35311f9445c371d62616b98e3426e8 2025-08-14T22:03:18.2693499Z deleted: 
sha256:5eeda1478a46d8d58267e8917422eb0a182a40c8bdfb4bfe0869923f8114c770 2025-08-14T22:03:18.2693864Z deleted: sha256:66d4cebb04304f556dd191b425a876f7dbbcde8c3c647af4ef47c10804e51f5a 2025-08-14T22:03:18.2694291Z deleted: sha256:0b526447174d22890be2bc866228e40989483b1102a0430b4ab3ad16dc6c7787 2025-08-14T22:03:18.2694667Z deleted: sha256:1aa31d55f8f9bb51f1eb702ba7d46ceda8290ed90e8e8cf299bb8a9179bf2ae2 2025-08-14T22:03:18.2695042Z deleted: sha256:dd1f47c8dc7518f303a91fc8aae81a512caff53987d5a89a378bb24c1c6d7707 2025-08-14T22:03:18.2695473Z deleted: sha256:d60f9527fcb284e73795a37d4f536badd451a2eade4c9314ebe549d31efcc876 2025-08-14T22:03:18.2695844Z deleted: sha256:f23ad0355704751b0f71a8900169354e3bf23a7b3f5fa2cd9b2478a561bfbb45 2025-08-14T22:03:18.2696253Z deleted: sha256:10e7acf6460743fcad0c1fff0bbd01158fbeb88151621c1e15ae5994f1c8ef55 2025-08-14T22:03:18.2696638Z deleted: sha256:f674e3067e97f1407f4cd55202d4c0c8641f02811550e65a00a875fc19354b75 2025-08-14T22:03:18.2697087Z deleted: sha256:8a9c75c896425ccd25101f0cf39316bec7779111954f44df726842bf583e907b 2025-08-14T22:03:18.2697465Z deleted: sha256:9730d30edfcaa135287479d80f1720b39c6f728228df6d0eb7f095e917cc16b6 2025-08-14T22:03:18.2697911Z deleted: sha256:2787e13cf97e870ca65312526c3000163ebf3da20fe59e5f5d53b1aeb4fb424b 2025-08-14T22:03:18.2698375Z deleted: sha256:d61197909174795bd69f8d5f534f1b086065d36b7aa6c5a50744eca6f8d6b12b 2025-08-14T22:03:18.2698790Z deleted: sha256:ecdfbb81e95b2ae2c8e9ab4ca72ba8564095caabb0512a47da8f866923f71bff 2025-08-14T22:03:18.2699241Z deleted: sha256:cd2d7c644df243742a0c0349af0d37570c06fdd1711ddc367e79514757a6d5cc 2025-08-14T22:03:18.2699856Z deleted: sha256:6703ab1ced70b30a87660c0dd778fe95fb90b04ed8461c2a331272aa54eb3499 2025-08-14T22:03:18.2700304Z deleted: sha256:b7088ce49d7df1d6fb18eee5fc5664e637c5649c89e581d972c76a83f60d0a62 2025-08-14T22:03:18.2700742Z deleted: sha256:d0d2786658af9907d8c4ecfa84fa9e2bd07131257264395b804deef744a5c39c 2025-08-14T22:03:18.2701177Z deleted: sha256:d46baf72d8e570e6004c6f95131cea6ede27eb01c213d8c1e8b263ab95fdfe95 2025-08-14T22:03:18.2701588Z deleted: sha256:0219ea0bd0e38d169ed596ed80807b0f70b609ec5f886d671c249d10575dff2c 2025-08-14T22:03:18.2701997Z deleted: sha256:77d1a1f15cf8ae85a4c5495d800378c307967004360814810fd13b07a74aee5e 2025-08-14T22:03:18.2702398Z deleted: sha256:47c77d89ce8782a94a6f5435b1611a76b47f830153ba4b462d3e08dcbdaa40f7 2025-08-14T22:03:18.2702817Z deleted: sha256:d5120b2e61fb0ccc32a2ad02fc0b2b908bc69f1f174268bde3d26d79ce46f046 2025-08-14T22:03:18.2703231Z deleted: sha256:65626052fd7e03a8e90c72072a54f0eaa43788cfcb0835ffb98b700be89b0567 2025-08-14T22:03:18.2703633Z deleted: sha256:05c09c0832c35f0128e0258b1d3069d7bb4b94ce58239faba5d585e49c34e904 2025-08-14T22:03:18.2704043Z deleted: sha256:2d6749fb2c30585eebb1d97e99318434ec34e0f7a4414e552fd4a44175f86839 2025-08-14T22:03:18.2704451Z deleted: sha256:2d65e2932810021e5b3cfedd89cfd851dd47fce63fbe5dc6959e59f3d8a98499 2025-08-14T22:03:18.2704881Z deleted: sha256:b2e71ddacad35b6caa3a77429bab51b654f6acaccc9e9263f1cb43edb8c53ac3 2025-08-14T22:03:18.2705305Z deleted: sha256:632a43100a629c40972b4da95fbbb581f29fe8b073a96386c72931d27ffbbefa 2025-08-14T22:03:18.2705714Z deleted: sha256:11964e5f5833fdf2bcc61c52f33d5aebf9b5504c6792baf58beb96b90398d10a 2025-08-14T22:03:18.2706121Z deleted: sha256:f0c1cb4c9e4655464b9b62b6589ac5005c2392213765ab4175bd61e3f6462643 2025-08-14T22:03:18.2706534Z deleted: sha256:5113aaee4b4d5ee45b58bcee467ac314112b02e4c4e5e9c3cc7a236dd308e9de 2025-08-14T22:03:18.2706956Z deleted: 
sha256:9cdc88c7b7fe728e15c72d0e8eef813ace31905b4b317a0a23f1334b6a22e604 2025-08-14T22:03:18.2707365Z deleted: sha256:8056a3da01752a91095e2d0afd80b625172f0915f22f7d998b9b926b9462dc5f 2025-08-14T22:03:18.2707757Z deleted: sha256:8a99968112e0edd39c242f3452b05d167911724468fdd9b18d11a8f5fa9c3ac8 2025-08-14T22:03:18.2708172Z deleted: sha256:6f70653bcfea9c1dd39aba76713adac0ac8f6f4c202387ff86a3ffe45d2079f2 2025-08-14T22:03:18.2708592Z deleted: sha256:9a0ed45f26188ecbfcf7658f46e29922b441969b2aded64d1d6b287b6de2e49c 2025-08-14T22:03:18.2708992Z deleted: sha256:f84c75780b110e68f7593fe9592456387118761b365a954a105aee72016adeac 2025-08-14T22:03:18.2709383Z deleted: sha256:1a5a81f8cbb945eee96e25ee8b4958d7140bb6751b86bc2e4a6aa9e18a16846c 2025-08-14T22:03:18.2709808Z deleted: sha256:7e072dc6aa8c1831ddc97ba8229235081976cb8036c06ee1320b33606e03f9a4 2025-08-14T22:03:18.2710186Z deleted: sha256:369af3627df8ecb48c51ea4fd3267e561b2f6821075ddce314e9485494447f16 2025-08-14T22:03:18.2710565Z deleted: sha256:4d49b99f2eee0f82788e33a9c771f75b1411b0b70ce47771fc1b3bc160f23961 2025-08-14T22:03:18.2710944Z deleted: sha256:fe04dcb9c711f36f9ed1df5b2d0854d30dc5abaa6e6cd493b85d4c2e2d2c3e1b 2025-08-14T22:03:18.2711325Z deleted: sha256:4800771a0435c52d6e480540ffa8a65ecc51fdc82a91302c1a373e6021bc37ca 2025-08-14T22:03:18.2711717Z deleted: sha256:90a2bf02e851326fc70d05470553ed33e578342d6e06bfa0cfaf331c4079b7e4 2025-08-14T22:03:18.2711945Z 2025-08-14T22:03:18.2712037Z Total reclaimed space: 51.8GB 2025-08-14T22:03:18.2785626Z Post job cleanup. 2025-08-14T22:03:18.2814532Z Post job cleanup. 2025-08-14T22:03:18.3614899Z [command]/usr/bin/git version 2025-08-14T22:03:18.3656918Z git version 2.47.1 2025-08-14T22:03:18.3695260Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/a38a19df-f1e7-4c13-b235-cf23f122f093/.gitconfig' 2025-08-14T22:03:18.3709994Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/a38a19df-f1e7-4c13-b235-cf23f122f093' before making global git config changes 2025-08-14T22:03:18.3715308Z Adding repository directory to the temporary git global config as a safe directory 2025-08-14T22:03:18.3716093Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-08-14T22:03:18.3780191Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2025-08-14T22:03:18.3816108Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :" 2025-08-14T22:03:18.4159084Z Entering 'android/libs/fbjni' 2025-08-14T22:03:18.4216382Z Entering 'third_party/FP16' 2025-08-14T22:03:18.4274692Z Entering 'third_party/FXdiv' 2025-08-14T22:03:18.4329698Z Entering 'third_party/NNPACK' 2025-08-14T22:03:18.4387389Z Entering 'third_party/NVTX' 2025-08-14T22:03:18.4452577Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T22:03:18.4504811Z Entering 'third_party/XNNPACK' 2025-08-14T22:03:18.4577114Z Entering 'third_party/aiter' 2025-08-14T22:03:18.4636128Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T22:03:18.4700478Z Entering 'third_party/benchmark' 2025-08-14T22:03:18.4759702Z Entering 'third_party/composable_kernel' 2025-08-14T22:03:18.4821182Z Entering 'third_party/cpp-httplib' 2025-08-14T22:03:18.4885096Z Entering 'third_party/cpuinfo' 2025-08-14T22:03:18.4934476Z Entering 'third_party/cudnn_frontend' 2025-08-14T22:03:18.5002496Z Entering 'third_party/cutlass' 
2025-08-14T22:03:18.5063787Z Entering 'third_party/fbgemm' 2025-08-14T22:03:18.5121857Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T22:03:18.5178387Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T22:03:18.5235807Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T22:03:18.5296071Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T22:03:18.5356916Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T22:03:18.5411364Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T22:03:18.5470070Z Entering 'third_party/fbgemm/external/json' 2025-08-14T22:03:18.5530637Z Entering 'third_party/flash-attention' 2025-08-14T22:03:18.5590011Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T22:03:18.5653717Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T22:03:18.5716319Z Entering 'third_party/flatbuffers' 2025-08-14T22:03:18.5777693Z Entering 'third_party/fmt' 2025-08-14T22:03:18.5834019Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T22:03:18.5890528Z Entering 'third_party/gloo' 2025-08-14T22:03:18.5947899Z Entering 'third_party/googletest' 2025-08-14T22:03:18.6003484Z Entering 'third_party/ideep' 2025-08-14T22:03:18.6060764Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T22:03:18.6124180Z Entering 'third_party/ittapi' 2025-08-14T22:03:18.6186132Z Entering 'third_party/kineto' 2025-08-14T22:03:18.6238133Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T22:03:18.6296784Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-08-14T22:03:18.6358142Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T22:03:18.6417644Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T22:03:18.6479426Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T22:03:18.6529999Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T22:03:18.6596536Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T22:03:18.6653390Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T22:03:18.6709856Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T22:03:18.6764007Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T22:03:18.6822920Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T22:03:18.6880909Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T22:03:18.6946282Z Entering 'third_party/kleidiai' 2025-08-14T22:03:18.6999726Z Entering 'third_party/mimalloc' 2025-08-14T22:03:18.7058639Z Entering 'third_party/nlohmann' 2025-08-14T22:03:18.7121268Z Entering 'third_party/onnx' 2025-08-14T22:03:18.7196094Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T22:03:18.7256212Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T22:03:18.7314444Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T22:03:18.7373391Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T22:03:18.7427644Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T22:03:18.7488124Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T22:03:18.7546838Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T22:03:18.7608216Z Entering 
'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T22:03:18.7662295Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T22:03:18.7719893Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T22:03:18.7775383Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T22:03:18.7833917Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T22:03:18.7906032Z Entering 'third_party/pocketfft' 2025-08-14T22:03:18.7962766Z Entering 'third_party/protobuf' 2025-08-14T22:03:18.8024692Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T22:03:18.8081855Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T22:03:18.8140799Z Entering 'third_party/psimd' 2025-08-14T22:03:18.8202456Z Entering 'third_party/pthreadpool' 2025-08-14T22:03:18.8258524Z Entering 'third_party/pybind11' 2025-08-14T22:03:18.8321089Z Entering 'third_party/python-peachpy' 2025-08-14T22:03:18.8381032Z Entering 'third_party/sleef' 2025-08-14T22:03:18.8440115Z Entering 'third_party/tensorpipe' 2025-08-14T22:03:18.8494962Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T22:03:18.8553344Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T22:03:18.8610068Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T22:03:18.8660973Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T22:03:18.8715269Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T22:03:18.8807446Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-08-14T22:03:18.8829176Z http.https://github.com/.extraheader 2025-08-14T22:03:18.8836940Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2025-08-14T22:03:18.8876017Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-08-14T22:03:18.9207693Z Entering 'android/libs/fbjni' 2025-08-14T22:03:18.9243441Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9281496Z Entering 'third_party/FP16' 2025-08-14T22:03:18.9317411Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9352886Z Entering 'third_party/FXdiv' 2025-08-14T22:03:18.9386282Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9422566Z Entering 'third_party/NNPACK' 2025-08-14T22:03:18.9460593Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9498783Z Entering 'third_party/NVTX' 2025-08-14T22:03:18.9537813Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9585197Z Entering 'third_party/VulkanMemoryAllocator' 2025-08-14T22:03:18.9619990Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9658032Z Entering 'third_party/XNNPACK' 2025-08-14T22:03:18.9700887Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9754619Z Entering 'third_party/aiter' 2025-08-14T22:03:18.9789482Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9827644Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-08-14T22:03:18.9866034Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9917319Z Entering 'third_party/benchmark' 2025-08-14T22:03:18.9953628Z http.https://github.com/.extraheader 2025-08-14T22:03:18.9992286Z Entering 'third_party/composable_kernel' 2025-08-14T22:03:19.0031532Z 
http.https://github.com/.extraheader 2025-08-14T22:03:19.0078946Z Entering 'third_party/cpp-httplib' 2025-08-14T22:03:19.0114377Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0151926Z Entering 'third_party/cpuinfo' 2025-08-14T22:03:19.0188604Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0230016Z Entering 'third_party/cudnn_frontend' 2025-08-14T22:03:19.0269026Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0305640Z Entering 'third_party/cutlass' 2025-08-14T22:03:19.0342373Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0393929Z Entering 'third_party/fbgemm' 2025-08-14T22:03:19.0428184Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0470644Z Entering 'third_party/fbgemm/external/asmjit' 2025-08-14T22:03:19.0507974Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0550742Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-08-14T22:03:19.0591402Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0629836Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-08-14T22:03:19.0672519Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0710806Z Entering 'third_party/fbgemm/external/cutlass' 2025-08-14T22:03:19.0741530Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0790546Z Entering 'third_party/fbgemm/external/googletest' 2025-08-14T22:03:19.0825049Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0859792Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-08-14T22:03:19.0895810Z http.https://github.com/.extraheader 2025-08-14T22:03:19.0933309Z Entering 'third_party/fbgemm/external/json' 2025-08-14T22:03:19.0974144Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1017139Z Entering 'third_party/flash-attention' 2025-08-14T22:03:19.1053036Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1093530Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-08-14T22:03:19.1128458Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1167797Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-08-14T22:03:19.1199218Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1246673Z Entering 'third_party/flatbuffers' 2025-08-14T22:03:19.1284784Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1324489Z Entering 'third_party/fmt' 2025-08-14T22:03:19.1364115Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1395857Z Entering 'third_party/gemmlowp/gemmlowp' 2025-08-14T22:03:19.1434274Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1472603Z Entering 'third_party/gloo' 2025-08-14T22:03:19.1512327Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1550086Z Entering 'third_party/googletest' 2025-08-14T22:03:19.1580076Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1614104Z Entering 'third_party/ideep' 2025-08-14T22:03:19.1651190Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1689527Z Entering 'third_party/ideep/mkl-dnn' 2025-08-14T22:03:19.1724456Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1766395Z Entering 'third_party/ittapi' 2025-08-14T22:03:19.1805306Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1838275Z Entering 'third_party/kineto' 2025-08-14T22:03:19.1882839Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1917223Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-08-14T22:03:19.1949990Z http.https://github.com/.extraheader 2025-08-14T22:03:19.1987744Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 
2025-08-14T22:03:19.2024538Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2061220Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-08-14T22:03:19.2097636Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2137455Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-08-14T22:03:19.2172603Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2215161Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-08-14T22:03:19.2253261Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2309064Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-08-14T22:03:19.2346588Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2388021Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-08-14T22:03:19.2425308Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2461343Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-08-14T22:03:19.2503063Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2539989Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-08-14T22:03:19.2578522Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2618583Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-08-14T22:03:19.2652003Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2691711Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-08-14T22:03:19.2726716Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2772616Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-08-14T22:03:19.2808568Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2844302Z Entering 'third_party/kleidiai' 2025-08-14T22:03:19.2880784Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2917858Z Entering 'third_party/mimalloc' 2025-08-14T22:03:19.2954597Z http.https://github.com/.extraheader 2025-08-14T22:03:19.2989241Z Entering 'third_party/nlohmann' 2025-08-14T22:03:19.3021699Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3060253Z Entering 'third_party/onnx' 2025-08-14T22:03:19.3097022Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3146800Z Entering 'third_party/onnx/third_party/pybind11' 2025-08-14T22:03:19.3182253Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3228097Z Entering 'third_party/opentelemetry-cpp' 2025-08-14T22:03:19.3265746Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3306697Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-08-14T22:03:19.3338990Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3378535Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-08-14T22:03:19.3418958Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3456636Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-08-14T22:03:19.3494363Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3529698Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-08-14T22:03:19.3561722Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3606216Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-08-14T22:03:19.3636158Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3679399Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-08-14T22:03:19.3715053Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3759513Z Entering 
'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-08-14T22:03:19.3791177Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3829105Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-08-14T22:03:19.3866264Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3907021Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-08-14T22:03:19.3940054Z http.https://github.com/.extraheader 2025-08-14T22:03:19.3992262Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-08-14T22:03:19.4027478Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4083418Z Entering 'third_party/pocketfft' 2025-08-14T22:03:19.4119862Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4154494Z Entering 'third_party/protobuf' 2025-08-14T22:03:19.4199546Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4235767Z Entering 'third_party/protobuf/third_party/benchmark' 2025-08-14T22:03:19.4272332Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4312026Z Entering 'third_party/protobuf/third_party/googletest' 2025-08-14T22:03:19.4350549Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4398578Z Entering 'third_party/psimd' 2025-08-14T22:03:19.4430711Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4469742Z Entering 'third_party/pthreadpool' 2025-08-14T22:03:19.4507835Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4542745Z Entering 'third_party/pybind11' 2025-08-14T22:03:19.4580435Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4614658Z Entering 'third_party/python-peachpy' 2025-08-14T22:03:19.4656242Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4694884Z Entering 'third_party/sleef' 2025-08-14T22:03:19.4732186Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4770090Z Entering 'third_party/tensorpipe' 2025-08-14T22:03:19.4805539Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4839423Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-08-14T22:03:19.4879131Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4913712Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-08-14T22:03:19.4949832Z http.https://github.com/.extraheader 2025-08-14T22:03:19.4991211Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-08-14T22:03:19.5024847Z http.https://github.com/.extraheader 2025-08-14T22:03:19.5059018Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-08-14T22:03:19.5096526Z http.https://github.com/.extraheader 2025-08-14T22:03:19.5134055Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-08-14T22:03:19.5176114Z http.https://github.com/.extraheader 2025-08-14T22:03:19.5321247Z A job completed hook has been configured by the self-hosted runner administrator 2025-08-14T22:03:19.5334176Z ##[group]Run '/home/ec2-user/runner-scripts/after_job.sh' 2025-08-14T22:03:19.5337677Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-08-14T22:03:19.5337949Z ##[endgroup] 2025-08-14T22:03:19.5427929Z [!ALERT!] Swap in detected! [!ALERT!] 2025-08-14T22:03:29.1770319Z [!ALERT!] Swap out detected [!ALERT!] 2025-08-14T22:03:45.2014376Z Cleaning up orphan processes
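
Editor's note on the benchmark-upload step recorded above: the job runs pytorch/test-infra's upload_benchmark_results.py with --benchmark-results-dir, --metadata, --runners and --dependencies, and the log shows it pushing test/test-reports/inference_huggingface.json to s3://ossci-benchmarks/v3/pytorch/pytorch/16976338999/48128261046/inference_huggingface.json using IAM-role credentials via boto3. The following is a minimal, hypothetical Python sketch of that flow for readers of this log; it assumes the script simply walks the results directory and PUTs each *.json file under the v3/<repo>/<workflow_id>/<job_id>/ key layout seen in the log. The function name, parameters, and wiring are illustrative assumptions, not the real pytorch/test-infra implementation, which may also attach the metadata, runner, and dependency records in other ways.

    import json
    import os

    import boto3  # credentials come from the runner's IAM role, as in the log above


    def upload_benchmark_results(results_dir, metadata, runners, dependencies,
                                 bucket="ossci-benchmarks", dry_run=False):
        """Sketch only: upload every *.json result file to
        s3://<bucket>/v3/<repo>/<workflow_id>/<job_id>/<file>.

        `runners` and `dependencies` mirror the --runners/--dependencies
        arguments shown in the log; in this sketch only `metadata` is used,
        to build the S3 key prefix.
        """
        s3 = boto3.client("s3")
        prefix = "v3/{repo}/{workflow_id}/{job_id}".format(
            repo=metadata["repo"],
            workflow_id=metadata["workflow_id"],
            job_id=metadata["job_id"],
        )
        for name in sorted(os.listdir(results_dir)):
            if not name.endswith(".json"):
                continue
            path = os.path.join(results_dir, name)
            key = f"{prefix}/{name}"
            print(f"Upload {path} to s3://{bucket}/{key}")
            if dry_run:
                continue
            with open(path, "rb") as f:
                s3.put_object(Bucket=bucket, Key=key, Body=f.read(),
                              ContentType="application/json")


    if __name__ == "__main__":
        # Example call mirroring the values printed in the log (hypothetical wiring).
        upload_benchmark_results(
            results_dir="test/test-reports",
            metadata={"repo": "pytorch/pytorch",
                      "workflow_id": 16976338999,
                      "job_id": 48128261046},
            runners=json.loads(os.environ.get("RUNNER_INFO", "[]")),
            dependencies={},
            dry_run=True,  # keep the example side-effect free
        )

The DRY_RUN branch in the logged shell step corresponds to the dry_run flag here: when true, the file list is printed but nothing is written to S3, which is why the CI job can safely skip the upload when test/test-reports is absent.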